%0 Journal Article %J Survey Methodology %T A Bivariate Hierarchical Bayesian Model for Estimating Cropland Cash Rental Rates at the County Level. %A Erciulescu A.L. %A Berg E. %A Cecere W. %A Ghosh M. %B Survey Methodology %0 Journal Article %J Assessing Writing %D 2017 %T Similarities and differences in constructs represented by U.S. States’ middle school writing tests and the 2007 national assessment of educational progress writing assessment %A Mo, Y. %A Troia, G. A. %K Assessing Writing %K assessment %K writing %B Assessing Writing %V Volume 33 %8 07/2017 %G eng %U http://www.sciencedirect.com/science/article/pii/S1075293517300193 %9 Assessing Writing %! Similarities and differences in constructs represented by U.S. States’ middle school writing tests and the 2007 national assessment of educational progress writing assessment %& 48–67 %0 Journal Article %J Reading Horizons %D 2016 %T The Common Core Writing Standards: A descriptive study of content and alignment with a sample of former state standards %A Troia, G. A. %A Olinghouse, N. G. %A Wilson, J. %A Stewart, K. O. %A Mo, Y. %A Hawkins, L. %A Kopke, R.A. %B Reading Horizons %G eng %0 Journal Article %J JSM Proceedings. Survey Research Methods Section. Alexandria, VA: American Statistical Association. %D 2016 %T Evaluating Record Linkage Software for Agricultural Surveys %A Bellow M.E. %A Daniel K. %A Gorsak M. %A Erciulescu A.L. %B JSM Proceedings. Survey Research Methods Section. Alexandria, VA: American Statistical Association. %G eng %U https://ww2.amstat.org/MembersOnly/proceedings/2016/data/assets/pdf/389754.pdf. %& 3225-3235 %0 Journal Article %J Reading & Writing: An Interdisciplinary Journal %D 2016 %T Predicting Students’ Writing Performance on the NAEP from Student- and State-level Variables %A Mo, Y. %A Troia, G. A. %B Reading & Writing: An Interdisciplinary Journal %G eng %0 Journal Article %J Molecular Cell Proteomics %D 2015 %T Large-Scale Interlaboratory Study to Develop, Analytically Validate and Apply Highly Multiplexed, Quantitative Peptide Assays to Measure Cancer-Relevant Proteins in Plasma. %A Susan Abbatiello %A Birgit Schilling %A D.R. Mani %A L.I. Shilling %A S.C. Hall %A B. McLean %A M. Albetolle %A S. Allen %A M. Burgess %A M.P. Cusack %A M Gosh %A V Hedrick %A J.M. Held %A H.D. Inerowicz %A A. Jackson %A H. Keshishian %A C.R. Kinsinger %A Lyssand, JS %A Makowski L %A Mesri M %A Rodriguez H %A Rudnick P %A Sadowski P %A Nell Sedransk %A Shaddox K %A Skates SJ %A Kuhn E %A Smith D %A Whiteaker, JR %A Whitwell C %A Zhang S %A Borchers CH %A Fisher SJ %A Gibson BW %A Liebler DC %A M.J. McCoss %A Neubert TA %A Paulovich AG %A Regnier FE %A Tempst, P %A Carr, SA %X

There is an increasing need in biology and clinical medicine to robustly and reliably measure tens to hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility, and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here, we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, determine analytical figures of merit, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and seven control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high quality data, we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to subnanogram/ml sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and interlaboratory reproducibility was <20%, sufficient for most biological studies and candidate protein biomarker verification. Digestion recovery of peptides was assessed and quantitative accuracy improved using heavy-isotope-labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an interlaboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible, and quantitative measurements of proteins and peptides in complex biological matrices such as plasma.

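For illustration only (not from the paper; all values below are hypothetical), a minimal sketch of how intra- and interlaboratory coefficients of variation (CV) of the kind summarized above can be computed from replicate peak-area measurements:

```python
# Hypothetical replicate peak areas for one peptide: rows = laboratories,
# columns = replicate measurements. Not data from the study.
import numpy as np

peak_areas = np.array([
    [1.02e6, 0.98e6, 1.05e6],
    [0.95e6, 0.97e6, 0.93e6],
    [1.10e6, 1.08e6, 1.12e6],
])

# Intra-laboratory CV: spread of replicates within each laboratory.
intra_cv = peak_areas.std(axis=1, ddof=1) / peak_areas.mean(axis=1)

# Interlaboratory CV: spread of the per-laboratory means across laboratories.
lab_means = peak_areas.mean(axis=1)
inter_cv = lab_means.std(ddof=1) / lab_means.mean()

print("median intra-laboratory CV: %.1f%%" % (100 * np.median(intra_cv)))
print("interlaboratory CV: %.1f%%" % (100 * inter_cv))
```
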
%B Molecular Cell Proteomics %V 14 %P 2357-74 %8 09/2015 %G eng %N 9 %R 10.1074/mcp.M114.047050 %0 Journal Article %J Statistical Analysis and Data Mining %D 2014 %T Big data, big results: Knowledge discovery in output from large-scale analytics %A A. F. Karr %A R. Ferrell %A T. H. McCormick %A P. B. Ryan %B Statistical Analysis and Data Mining %V 7 %P 404-412 %8 09/2014 %G eng %N 5 %R 10.1002/sam.11237 %0 Journal Article %J Journal of Computational and Graphical Statistics %D 2014 %T The generalized multiset sampler %A H. J. Kim %A S. N. MacEachern %B Journal of Computational and Graphical Statistics %8 10/2014 %G eng %U http://dx.doi.org/10.1080/10618600.2014.962701 %R 10.1080/10618600.2014.962701 %0 Journal Article %J Statistical Journal of the IAOS %D 2014 %T Improving the Synthetic Longitudinal Business Database %A S. K. Kinney %A J. P. Reiter %A J. Miranda %B Statistical Journal of the IAOS %V 30 %P 129-135 %G eng %N 2 %0 Journal Article %J Statistical Journal of the International Association for Official Statistics %D 2014 %T SynLBD 2.0: Improving the Synthetic Longitudinal Business Database %A S. K. Kinney %A J. P. Reiter %A J. Miranda %B Statistical Journal of the International Association for Official Statistics %V 30 %P 129-135 %G eng %0 Conference Paper %B JSM Proceedings, Section on Survey Research Methods 2013 %D 2013 %T Construction of replicate weights for Project TALENT %A A. F. Karr %A Z. He %A M. P. Cohen %A D. Battle %A D. L. Achorn %A A. D. McKay %B JSM Proceedings, Section on Survey Research Methods 2013 %G eng %0 Journal Article %J Molecular and Cellular Proteomics %D 2013 %T Design, Implementation and Multisite Evaluation of a System Suitability Protocol for the Quantitative Assessment of Instrument Performance in Liquid Chromatography-Multiple Reaction Monitoring-MS (LC-MRM-MS) %A Abbatiello, S. %A Feng, X. %A Sedransk, N. %A Mani, DR %A Schilling, B %A Maclean, B %A Zimmerman, LJ %A Cusack, MP %A Hall, SC %A Addona, T %A Allen, S %A Dodder, NG %A Ghosh, M %A Held, JM %A Hedrick, V %A Inerowicz, HD %A Jackson, A %A Keshishian, H %A Kim, JW %A Lyssand, JS %A Riley, CP %A Rudnick, P %A Sadowski, P %A Shaddox, K %A Smith, D %A Tomazela, D %A Wahlander, A %A Waldemarson, S %A Whitwell, CA %A You, J %A Zhang, S %A Kinsinger, CR %A Mesri, M %A Rodriguez, H %A Borchers, CH %A Buck, C %A Fisher, SJ %A Gibson, BW %A Liebler, D %A Maccoss, M %A Neubert, TA %A Paulovich, A %A Regnier, F %A Skates, SJ %A Tempst, P %A Wang, M %A Carr, SA %X

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring of a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and the RT drift <0.5min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use and need for a SSP to establish robust and reliable system performance. Use of a SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.

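As a rough illustration of how the pass/fail thresholds quoted above could be applied to replicate injections of a system suitability sample, here is a small sketch; the measurements are invented, and RT drift is approximated as the max-minus-min retention time across injections, which is an assumption rather than the protocol's own definition:

```python
import numpy as np

# Hypothetical replicate injections of one system suitability peptide.
peak_area = np.array([2.1e5, 2.3e5, 2.0e5, 2.2e5])      # arbitrary units
peak_width = np.array([0.21, 0.22, 0.20, 0.23])          # minutes
retention_time = np.array([18.02, 18.05, 18.01, 18.04])  # minutes

def cv(x):
    """Coefficient of variation of replicate measurements."""
    return np.std(x, ddof=1) / np.mean(x)

checks = {
    "peak area CV < 0.15": cv(peak_area) < 0.15,
    "peak width CV < 0.15": cv(peak_width) < 0.15,
    "RT standard deviation < 0.15 min": np.std(retention_time, ddof=1) < 0.15,
    "RT drift < 0.5 min": np.ptp(retention_time) < 0.5,  # drift taken as max - min (assumption)
}
for criterion, passed in checks.items():
    print(criterion, "->", "PASS" if passed else "FAIL")
```
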
%B Molecular and Cellular Proteomics %V 12 %P 2623-2639 %G eng %R 10.1074/mcp.M112.027078 %0 Journal Article %J Cheminformatics %D 2012 %T ChemModLab: A web-based cheminformatics modeling laboratory %A Hughes-Oliver JM %A Brooks A %A Welch W %A Khaledi MG %A Hawkins DM %A Young SS %A Patil K %A Howell GW %A Ng RT %A Chu MT %X

ChemModLab, written by the ECCR @ NCSU consortium under NIH support, is a toolbox for fitting and assessing quantitative structure-activity relationships (QSARs). Its elements are: a cheminformatic front end used to supply molecular descriptors for use in modeling; a set of methods for fitting models; and methods for validating the resulting model. Compounds may be input as structures from which standard descriptors will be calculated using the freely available cheminformatic front end PowerMV; PowerMV also supports compound visualization. In addition, the user can directly input their own choices of descriptors, so the capability for comparing descriptors is effectively unlimited. The statistical methodologies comprise a comprehensive collection of approaches whose validity and utility have been accepted by experts in the fields. As far as possible, these tools are implemented in open-source software linked into the flexible R platform, giving the user the capability of applying many different QSAR modeling methods in a seamless way. As promising new QSAR methodologies emerge from the statistical and data-mining communities, they will be incorporated in the laboratory. The web site also incorporates links to public-domain data sets that can be used as test cases for proposed new modeling methods. The capabilities of ChemModLab are illustrated using a variety of biological responses, with different modeling methodologies being applied to each. These show clear differences in quality of the fitted QSAR model, and in computational requirements. The laboratory is web-based, and use is free. Researchers with new assay data, a new descriptor set, or a new modeling method may readily build QSAR models and benchmark their results against other findings. Users may also examine the diversity of the molecules identified by a QSAR model. Moreover, users have the choice of placing their data sets in a public area to facilitate communication with other researchers; or can keep them hidden to preserve confidentiality.

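ChemModLab itself is a web-based system built largely on R; the following Python sketch (descriptors and activities simulated, not a ChemModLab interface) only illustrates the generic fit-then-validate QSAR workflow the abstract describes:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                                   # simulated molecular descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)    # simulated activity

# One candidate QSAR model; ChemModLab compares many such methods side by side.
model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated R^2 as a simple measure of predictive quality.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```
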
%B Cheminformatics %V 11 %P 61-81 %G eng %R 10.3233/CI-2008-0016 %0 Journal Article %J Statistical Science %D 2011 %T Make research data public? - Not always so simple: A Dialogue for statisticians and science editors %A Nell Sedransk %A Lawrence H. Cox %A Deborah Nolan %A Keith Soper %A Cliff Spiegelman %A Linda J. Young %A Katrina L. Kelner %A Robert A. Moffitt %A Ani Thakar %A Jordan Raddick %A Edward J. Ungvarsky %A Richard W. Carlson %A Rolf Apweiler %X

Putting data into the public domain is not the same thing as making those data accessible for intelligent analysis. A distinguished group of editors and experts who were already engaged in one way or another with the issues inherent in making research data public came together with statisticians to initiate a dialogue about policies and practicalities of requiring published research to be accompanied by publication of the research data. This dialogue carried beyond the broad issues of the advisability, the intellectual integrity, the scientific exigencies to the relevance of these issues to statistics as a discipline and the relevance of statistics, from inference to modeling to data exploration, to science and social science policies on these issues.

%B Statistical Science %V 26 %P 41-50 %G eng %R 10.1214/10-STS320 %0 Journal Article %J PACE %D 2011 %T Systematic decrements in QTc between the first and second day of contiguous daily ECG recordings under controlled conditions %A Beasley CM Jr %A Benson C %A Xia JQ %A Young SS %A Haber H %A Mitchell MI %A Loghin C %K ECG %K QT interval %X

BACKGROUND: Many thorough QT (TQT) studies use a baseline day and double delta analysis to account for potential diurnal variation in QTc. However, little is known about systematic changes in the QTc across contiguous days when normal volunteers are brought into a controlled inpatient environment.

%B PACE %V 34 %P 1116-1127 %8 April %G eng %R 10.1111/j.1540-8159.2011.03117.x %0 Journal Article %J International Statistical Review %D 2011 %T Toward Unrestricted Public Use Business Microdata: The Synthetic Longitudinal Business Database %A S. K. Kinney %A J. P. Reiter %A AP Reznek %A J Miranda %A R Jarmin %A JM Abowd %B International Statistical Review %V 79 %P 362-384 %G eng %N 3 %0 Journal Article %J Clinical Chemistry %D 2010 %T Analytical Validation of Proteomic-Based Multiplex Assays: A Workshop Report by the NCI-FDA Interagency Oncology Task Force on Molecular Diagnostics %A Steven A. Carr %A Nell Sedransk %A Henry Rodriguez %A Zivana Tezak %A Mehdi Mesri %A Daniel C. Liebler %A Susan J. Fisher %A Paul Tempst %A Tara Hiltke %A Larry G. Kessler %A Christopher R. Kinsinger %A Reena Philip %A David F. Ransohoff %A Steven J. Skates %A Fred E. Regnier %A N. Leigh Anderson %A Elizabeth Mansfield %A on behalf of the Workshop Participants %X

Clinical proteomics has the potential to enable the early detection of cancer through the development of multiplex assays that can inform clinical decisions. However, there has been some uncertainty among translational researchers and developers as to the specific analytical measurement criteria needed to validate protein-based multiplex assays. To begin to address the causes of this uncertainty, a day-long workshop titled “Interagency Oncology Task Force Molecular Diagnostics Workshop” was held in which members of the proteomics and regulatory communities discussed many of the analytical evaluation issues that the field should address in development of protein-based multiplex assays for clinical use. This meeting report explores the issues raised at the workshop and details the recommendations that came out of the day’s discussions, such as a workshop summary discussing the analytical evaluation issues that specific proteomic technologies should address when seeking US Food and Drug Administration approval.

%B Clinical Chemistry %V 56 %P 237-243 %G eng %R 10.1373/clinchem.2009.136416 %0 Book Section %D 2008 %T Citizen access to government statistical information %A Alan F. Karr %E H. Chen %E L. Brandt %E V. Gregg %E R. Traunmüller %E S. Dawes %E E. Hovy %E A. Macintosh %E C. A. Larson %X

Modern electronic technologies have dramatically increased the volume of information collected and assembled by government agencies at all levels. This chapter describes digital government research aimed at keeping government data warehouses from turning into data cemeteries. The products of the research exploit modern electronic technologies in order to allow “ordinary citizens” and researchers access to government-assembled information. The goal is to help ensure that more data also means better and more useful data. Underlying the chapter are three tensions. The first is between comprehensiveness and understandability of information available to non-technically oriented “private citizens.” The second is between ensuring usefulness of detailed statistical information and protecting confidentiality of data subjects. The third tension is between the need to analyze “global” data sets and the reality that government data are distributed among both levels of government and agencies (typically, by the “domain” of data, such as education, health, or transportation).

%I Springer US %P 503-529 %G eng %& 25 %0 Thesis %D 2003 %T Bayesian Stochastic Computation with application to Model Selection and Inverse Problems %A G. Molina %I Duke University %C Durham %G eng %9 masters %0 Journal Article %J Journal of Chemistry Information and Computer Sciences %D 2003 %T Design of diverse and focused combinatorial libraries using an alternating algorithm %A Young SS %A Wang M %A Gu F %B Journal of Chemistry Information and Computer Sciences %V 43 %P 1916-1921 %G eng %0 Book Section %D 2002 %T Advances in Digital Government %A A. F. Karr %A J. Lee %A A. P. Sanil %A J. Hernandez %A S. Karimi %A K. Litwin %E E. Elmagarmid %E W. M. McIver %X

The Internet provides an efficient mechanism for Federal agencies to distribute their data to the public. However, it is imperative that such data servers have built-in mechanisms to ensure that confidentiality of the data, and the privacy of individuals or establishments represented in the data, are not violated. We describe a prototype dissemination system developed for the National Agricultural Statistics Service that uses aggregation of adjacent geographical units as a confidentiality-preserving technique. We also outline a Bayesian approach to statistical analysis of the aggregated data.

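A toy sketch (not the NASS prototype; the units, counts, and threshold are invented) of the basic idea of aggregating adjacent geographic units until each published cell contains enough respondents:

```python
# Hypothetical adjacent counties, in geographic order, with respondent counts.
counties = ["A", "B", "C", "D", "E"]
respondents = {"A": 2, "B": 7, "C": 1, "D": 1, "E": 12}
MIN_RESPONDENTS = 5   # minimum cell size before a cell may be released (assumed rule)

cells, current = [], []
for county in counties:
    current.append(county)
    if sum(respondents[c] for c in current) >= MIN_RESPONDENTS:
        cells.append(current)
        current = []
if current:
    if cells:
        cells[-1].extend(current)   # fold leftover units into the last cell
    else:
        cells.append(current)

print(cells)   # [['A', 'B'], ['C', 'D', 'E']]
```
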
%I Kluwer %C Boston %P 181-196 %@ 978-1-4020-7067-9 %G eng %& Web-based systems that disseminate information from data but preserve confidentiality %R 10.1007/0-306-47374-7_11 %0 Journal Article %J Transportation Research Record C %D 2002 %T Variability of travel times on arterial streets: effects of signals and volume %A A. F. Karr %A T.L. Graves %A A. Mockus %A P. Schuster %B Transportation Research Record C %V 10 %P 000-000 %G eng %0 Journal Article %J INTERACTIONS %D 2002 %T Visualizing Software Changes %A Stephen G. Eick %A Paul Schuster %A Audris Mockus %A Todd L. Graves %A Alan F. Karr %B INTERACTIONS %V 17 %P 29–31 %G eng %0 Conference Paper %B In IEEE Transactions on Software Engineering %D 2001 %T Does code decay? Assessing the evidence from change management data %A Stephen G. Eick %A Todd L. Graves %A Alan F. Karr %A J. S. Marron %A Audris Mockus %X

A central feature of the evolution of large software systems is that change, which is necessary to add new functionality, accommodate new hardware, and repair faults, becomes increasingly difficult over time. In this paper, we approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, that serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed. Index Terms: Software maintenance, metrics, statistical analysis, fault potential, span of changes, effort modeling.

%B In IEEE Transactions on Software Engineering %P 1–12 %G eng %0 Journal Article %J IEEE Transactions on Software Engineering %D 2000 %T Predicting fault incidence using software change history %A A. F. Karr %A S. G. Eick %A T.L. Graves %A J. S. Marron %A H. Siy %K aging %K change history %K degradation %K management of change %K software fault tolerance %K software maintenance %X

This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change and we measure the extent of decay by counting the number of faults in code in a period of time. Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: For instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on the average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults. Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight

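A small sketch (not the authors' fitted model; the weighting function and the change history are invented) of the idea in the last sentence above, scoring a module's fault potential as a sum of contributions from its past changes with large, recent changes weighted most heavily:

```python
import math

def fault_potential(changes, decay_per_year=0.5):
    """changes: list of (age_in_years, lines_changed) for the module's past deltas.
    Each change contributes its size, discounted exponentially with age."""
    return sum(lines * math.exp(-decay_per_year * age) for age, lines in changes)

# Hypothetical change history for one module.
history = [(3.0, 400), (1.2, 50), (0.1, 250)]
print(round(fault_potential(history), 1))
```
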
%B IEEE Transactions on Software Engineering %V 26 %P 653-661 %G eng %R 10.1109/32.859533 %0 Journal Article %J In Papers in Regional Science %D 1999 %T Estimation of Demand due to Welfare Reform %A Sen, Ashish %A P. Metaxatos %A Sööt, Siim %A Piyushimita Thakuriah %B In Papers in Regional Science %V 78 %P 195-211 %G eng %0 Journal Article %J Papers in Regional Science %D 1999 %T Welfare reform and spatial matching between clients and jobs %A Sen, Ashish %A Metaxatos, Paul %A Sööt, Siim %A Thakuriah, Vonu %K C13 %K C51 %K C52 %K entry-level job openings %K I31 %K J23 %K C12 %K Welfare to work %K R12 %K R41 %K R53 %K targeted service %K travel demand %X

The recent Welfare Reform Act requires several categories of public assistance recipients to transition to the work force. In most metropolitan areas public assistance clients reside great distances from areas of entry-level jobs. Any program designed to provide access to these jobs, for those previously on public aid, needs relevant transportation services when the job search process begins. Therefore it is essential that the latent demand for commuting among public aid clients be assessed in developing public transportation services. The location of entry-level jobs must also be known or, as in this article, estimated using numerous data sources. This article reports on such a demand estimation effort, focusing primarily on the use of Regional Science methods.

%B Papers in Regional Science %I Springer-Verlag %V 78 %P 195-211 %G eng %U http://dx.doi.org/10.1007/s101100050021 %R 10.1007/s101100050021 %0 Book Section %D 1998 %T Good Statistical Practice %A Alan Karr %E C. E. Minder %E F. Friedl %I Austrian Statistical Society %P 175-179 %G eng %& Modeling software changes %0 Conference Paper %B Software Metrics Symposium, 1998. Metrics 1998. Proceedings. Fifth International %D 1998 %T Inferring change effort from configuration management databases %A T.L. Graves %A A. Mockus %X

In this paper we describe a methodology and algorithm for historical analysis of the effort necessary for developers to make changes to software. The algorithm identifies factors which have historically increased the difficulty of changes. This methodology has implications for research into cost drivers. As an example of a research finding, we find that a system under study was “decaying” in that changes grew more difficult to implement at a rate of 20% per year. We also quantify the difference in costs between changes that fix faults and additions of new functionality: fixes require 80% more effort after accounting for size. Since our methodology adds no overhead to the development process, we also envision it being used as a project management tool: for example, developers can identify code modules which have grown more difficult to change than previously, and can match changes to developers with appropriate expertise. The methodology uses data from a change management system, supported by monthly time sheet data if available. The method’s performance does not degrade much when the quality of the time sheet data is limited. We validate our results using a survey of the developers under study: the change efforts resulting from the algorithm match the developers’ opinions. Our methodology includes a technique based on the jackknife to determine factors that contribute significantly to change effort

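To make the jackknife idea mentioned above concrete, here is a generic sketch (simulated data, not the authors' algorithm): a leave-one-out jackknife standard error for the coefficient of a "fix vs. new feature" indicator in a simple effort regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
size = rng.gamma(2.0, 50, size=n)            # simulated lines changed per change
is_fix = rng.integers(0, 2, size=n)          # simulated indicator: fault fix vs. new code
effort = 0.02 * size + 1.5 * is_fix + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), size, is_fix])

def fix_effect(X, y):
    """Least-squares coefficient of the is_fix column."""
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

theta_hat = fix_effect(X, effort)
loo = np.array([fix_effect(np.delete(X, i, axis=0), np.delete(effort, i))
                for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
print("estimated fix effect: %.2f (jackknife SE %.2f)" % (theta_hat, se_jack))
```
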
%B Software Metrics Symposium, 1998. Metrics 1998. Proceedings. Fifth International %P 267-273 %8 Nov %G eng %R 10.1109/METRIC.1998.731253 %0 Book Section %D 1998 %T SoftStat '97: Advances in Statistical Software 6 %A A. F. Karr %A G. Eick %A A. Mockus %A T.L. Graves %E W. Bandilla %E F. Faulbaum %I Lucius & Lucius %P 3-10 %G eng %& Web-based text visualization %0 Journal Article %J In Transportation Research Record %D 1998 %T Transportation Planning Process for Linking Welfare Recipients to Jobs %A Metaxatos, Paul %A Sööt, Siim %A Piyushimita Thakuriah %A Sen, Ashish %B In Transportation Research Record %V 1626 %P 149-158 %G eng %0 Journal Article %J World Wide Web %D 1998 %T A Web laboratory for software data analysis %A G. Eick %A A. Mockus %A T.L. Graves %A A. F. Karr %X

We describe two prototypical elements of a World Wide Web-based system for visualization and analysis of data produced in the software development process. Our system incorporates interactive applets and visualization techniques into Web pages. A particularly powerful example of such an applet, SeeSoft™, can display thousands of lines of text on a single screen, allowing detection of patterns not discernible directly from the text. In our system, Live Documents replace static statistical tables in ordinary documents by dynamic Web-based documents, in effect allowing the "reader" to customize the document as it is read. Use of the Web provides several advantages. The tools access data from a very large central data base, instead of requiring that it be downloaded; this ensures that readers are always working with the most up-to-date version of the data, and relieves readers of the responsibility of preparing data for their use. The tools encourage collaborative research, as one researcher's observations can easily be replicated and studied in greater detail by other team members. We have found this particularly useful while studying software data as part of a team that includes researchers in computer science, software engineering, and statistics, as well as development managers. Live documents will also help the Web revolutionize scientific publication, as papers published on the Web can contain Java applets that permit readers to confirm the conclusions reached by the authors' statistical analyses.

%B World Wide Web %V 1 %P 55-60 %G eng %R 10.1023/A:1019299211575 %0 Book Section %B Case Studies in Bayesian Statistics %D 1997 %T A Random-Effects Multinomial Probit Model of Car Ownership Choice %A Nobile, Agostino %A Bhat, Chandra R. %A Pas, Eric I. %E Gatsonis, Constantine %E Hodges, James S. %E Kass, Robert E. %E McCulloch, Robert %E Rossi, Peter %E Singpurwalla, Nozer D. %K car ownership %K longitudinal data %K Multinomial probit model %X

The number of cars in a household has an important effect on its travel behavior (e.g., choice of number of trips, mode to work and non-work destinations), hence car ownership modeling is an essential component of any travel demand forecasting effort. In this paper we report on a random effects multinomial probit model of car ownership level, estimated using longitudinal data collected in the Netherlands. A Bayesian approach is taken and the model is estimated by means of a modification of the Gibbs sampling with data augmentation algorithm considered by McCulloch and Rossi (1994). The modification consists in performing, after each Gibbs sampling cycle, a Metropolis step along a direction of constant likelihood. An examination of the simulation output illustrates the improved performance of the resulting sampler.

%B Case Studies in Bayesian Statistics %S Lecture Notes in Statistics %I Springer New York %V 121 %P 419-434 %@ 978-0-387-94990-1 %G eng %U http://dx.doi.org/10.1007/978-1-4612-2290-3_13 %R 10.1007/978-1-4612-2290-3_13 %0 Journal Article %J Environmental Health Perspectives %D 1995 %T Effect of outdoor airborne particulate matter on daily death count %A P. Styer %A McMillan, N %A Gao, F %A Davis, J %A Jerome Sacks %X

To investigate the possible relationship between airborne particulate matter and mortality, we developed regression models of daily mortality counts using meteorological covariates and measures of outdoor PM10. Our analyses included data from Cook County, Illinois, and Salt Lake County, Utah. We found no evidence that particulate matter ≤ 10 microns (PM10) contributes to excess mortality in Salt Lake County, Utah. In Cook County, Illinois, we found evidence of a positive PM10 effect in spring and autumn, but not in winter and summer. We conclude that the reported effects of particulates on mortality are unconfirmed.

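The generic form of such an analysis, for illustration only (the data below are simulated, not the Cook County or Salt Lake County series), is a Poisson regression of daily death counts on PM10 and meteorological covariates:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 365
pm10 = rng.gamma(3.0, 12.0, size=n_days)                          # simulated PM10, ug/m^3
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)      # simulated daily temperature
deaths = rng.poisson(np.exp(3.0 + 0.002 * pm10 - 0.01 * temp))    # simulated daily death counts

X = sm.add_constant(np.column_stack([pm10, temp]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
print(fit.params)      # intercept, PM10 effect, temperature effect (on the log scale)
```
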
%B Environmental Health Perspectives %V 103 %P 490–497 %G eng %0 Journal Article %D 1994 %T Multiworker Household Travel Demand %A Sööt, Siim %A Sen, Ashish %A Marston, J. %A Piyushimita Thakuriah %K Automobile ownership %K Demographics %K Employed %K Highway travel %K Households %K Income %K New products %K Population density %K Travel behavior %K Travel surveys %K Trip generation %K Urban areas %K Vehicle miles of travel %X The purpose of this study is to examine the travel behavior and related characteristics of multiworker households (MWHs) (defined as households with at least two workers) and how they contribute to the ever-increasing demand for transportation services. On average they have incomes which exceed the national household average and often have multiple automobiles and as households they generate a considerable number of trips. The virtual dearth of previous studies of MWHs makes an overview of their characteristics and their travel behavior necessary. This study reveals that the number of MWHs has continued to grow, as has their use of highways; they are found in disproportionate numbers in low density urban areas distant from public transportation. They also have new vehicles, and drive each vehicle more miles than other households. As households, MWHs travel more than do other households. However, an individual worker’s ability and desire to travel is constrained by time factors, among others, and transportation use by MWHs, when calculated on a per worker basis, is relatively low. %I Federal Highway Administration %V 1 %P 30 p %G eng %U http://nhts.ornl.gov/1990/doc/demographic.pdf