%0 Book Section %B Terrorism Informatics %D 2008 %T Homeland Insecurity %A Stephen E. Fienberg %E Chen, Hsinchun %E Reid, Edna %E Sinai, Joshua %E Silke, Andrew %E Ganor, Boaz %X

Following the events of September 11, 2001, there has been heightened attention in the United States and elsewhere to the use of multiple government and private databases for the identification of possible perpetrators of future attacks, as well as an unprecedented expansion of federal government data mining activities, many involving databases containing personal information. There have also been claims that prospective data mining could be used to find the “signature” of terrorist cells embedded in larger networks. We present an overview of why the public has concerns about such activities and describe some proposals for the search of multiple databases which supposedly do not compromise possible pledges of confidentiality to the individuals whose data are included. We also explore their link to the related literature on privacy-preserving data mining. In particular, we focus on the matching problem across databases and the concept of “selective revelation” and their confidentiality implications.
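
The matching step at the heart of such proposals can be made concrete with a small sketch. The example below is hypothetical rather than anything from the chapter: the shared key, the blind() helper, and the toy records are all assumptions, used only to show how two data holders might compare keyed hashes so that only overlapping records, not full databases, are revealed.

```python
import hashlib
import hmac

# Hypothetical sketch, not a method from the chapter: two data holders
# compare keyed hashes of an identifier so that only overlapping records,
# not full databases, are revealed. The key and toy records are assumptions.
SHARED_KEY = b"agreed-out-of-band"

def blind(identifier: str) -> str:
    """Replace a raw identifier with a keyed hash; only parties holding
    the shared key can compute matching tokens."""
    return hmac.new(SHARED_KEY, identifier.lower().encode(), hashlib.sha256).hexdigest()

database_a = {"alice smith", "bob jones"}   # held by one agency
database_b = {"bob jones", "carol wu"}      # held by another

tokens_a = {blind(name) for name in database_a}
tokens_b = {blind(name) for name in database_b}

# Only the intersection of blinded tokens is disclosed, a crude form of
# the "selective revelation" the abstract refers to.
print(f"{len(tokens_a & tokens_b)} record(s) match across the two databases")
```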

%B Terrorism Informatics %S Integrated Series In Information Systems %I Springer US %V 18 %P 197-218 %@ 978-0-387-71612-1 %G eng %U http://dx.doi.org/10.1007/978-0-387-71613-8_10 %R 10.1007/978-0-387-71613-8_10 %0 Book Section %B Web Dynamics %D 2004 %T How Large Is the World Wide Web? %A Adrian Dobra %A Stephen E. Fienberg %X

There are many metrics one could consider for estimating the size of the World Wide Web, and in the present chapter we focus on size in terms of the number N of Web pages. Since a database with all the valid URLs on the Web cannot be constructed and maintained, determining N by counting is impossible. For the same reasons, estimating N by directly sampling from the Web is also infeasible. Instead of studying the Web as a whole, one can try to assess the size of the publicly indexable Web, which is the part of the Web that is considered for indexing by the major search engines. Several groups of researchers have invested considerable effort in developing sound sampling schemes that involve submitting a number of queries to several major search engines. Lawrence and Giles [8] developed a procedure for sampling Web documents by submitting various queries to a number of search engines. We contrast their study with the one performed by Bharat and Broder [2] in November 1997. Although both experiments took place at almost the same time, their estimates are significantly different. In this chapter we review how the size of the indexable Web was estimated by three groups of researchers using three different statistical models: Lawrence and Giles [8, 9], Bharat and Broder [2], and Bradlow and Schmittlein [3]. Then we present a statistical framework for the analysis of data sets collected by query-based sampling, utilizing a hierarchical Bayes formulation of the Rasch model for multiple-list population estimation developed in [6]. We explain why this approach seems to be in reasonable accord with the real-world constraints and thus allows us to make credible inferences about the size of the Web. We give two different methods that lead to credible estimates of the size of the Web in a reasonable amount of time and are also consistent with the real-world constraints.
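
The overlap-based logic behind estimates such as Bharat and Broder's can be illustrated with a minimal sketch of the classical two-list capture-recapture (Lincoln-Petersen) estimator; the sample counts below are invented for illustration and are not figures from the chapter.

```python
# A minimal sketch, with invented counts, of the two-list capture-recapture
# (Lincoln-Petersen) idea underlying overlap-based size estimates: if two
# search engines sampled pages independently, the overlap of the samples
# calibrates the total size N.

def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate N from two 'captures': under independence,
    P(page in both) ~ (n1/N) * (n2/N), so N ~ n1 * n2 / overlap."""
    if overlap == 0:
        raise ValueError("no overlap: the estimator is undefined")
    return n1 * n2 / overlap

# Hypothetical numbers: 10,000 pages sampled via engine A, 8,000 via
# engine B, 400 found by both.
print(f"Estimated indexable Web size: {lincoln_petersen(10_000, 8_000, 400):,.0f} pages")
```

The hierarchical Bayes Rasch formulation discussed in the chapter exists precisely because this simple estimator assumes the lists are independent and every page is equally likely to be captured, assumptions that query-based samples of the Web violate.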

%B Web Dynamics %I Springer Berlin Heidelberg %P 23-43 %@ 978-3-642-07377-9 %G eng %U http://dx.doi.org/10.1007/978-3-662-10874-1_2 %R 10.1007/978-3-662-10874-1_2 %0 Journal Article %D 2001 %T A Hybrid High-Order Markov Chain Model for Computer Intrusion Detection %A Ju, W-H %A Yehuda Vardi %X

A hybrid model based mostly on a high-order Markov chain and occasionally on a statistical-independence model is proposed for profiling the command sequences of a computer user in order to identify a "signature behavior" for that user. Based on the model, an estimation procedure for such a signature behavior, driven by maximum likelihood (ML) considerations, is devised. The formal ML estimates are numerically intractable, but the ML-optimization problem can be replaced by a linear inverse problem with positivity constraint (LININPOS), for which the EM algorithm can be used as an equation solver to produce an approximate ML estimate. The intrusion detection system works by comparing a user’s command sequence to the user’s and others’ estimated signature behaviors in real time through statistical hypothesis testing. A form of likelihood-ratio test is used to detect whether a given sequence of commands is from the proclaimed user, with the alternative hypothesis being that of a masquerader. Applying the model to real-life data collected at AT&T Labs-Research indicates that the new methodology holds some promise for intrusion detection.
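
A minimal sketch of the likelihood-ratio idea follows, using a smoothed first-order Markov chain in place of the paper's high-order/independence hybrid and its EM-based estimation; the toy command sequences, smoothing constant, and decision rule are assumptions for illustration.

```python
from collections import defaultdict
import math

# Illustrative sketch only: a smoothed first-order Markov chain stands in
# for the paper's high-order hybrid model; all training data are toy inputs.

def fit_markov(seq, alpha=1.0):
    """Estimate add-alpha smoothed first-order transition probabilities
    from a command sequence."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = sorted(set(seq))
    for prev, cur in zip(seq, seq[1:]):
        counts[prev][cur] += 1.0
    model = {}
    for p in vocab:
        total = sum(counts[p].values())
        model[p] = {c: (counts[p][c] + alpha) / (total + alpha * len(vocab))
                    for c in vocab}
    return model

def log_lik(model, seq, floor=1e-6):
    """Log-likelihood of a command sequence under a fitted model; unseen
    commands fall back to a small floor probability."""
    return sum(math.log(model.get(prev, {}).get(cur, floor))
               for prev, cur in zip(seq, seq[1:]))

user_train = ["ls", "cd", "ls", "vi", "make", "ls", "cd", "vi", "make"] * 20
masquerader_train = ["wget", "chmod", "sh", "wget", "sh", "chmod"] * 20
observed = ["wget", "chmod", "sh", "wget"]

# Likelihood-ratio statistic: positive values favor the proclaimed user,
# negative values favor the masquerader alternative.
lr = (log_lik(fit_markov(user_train), observed)
      - log_lik(fit_markov(masquerader_train), observed))
print("decision:", "user" if lr > 0 else "possible masquerader", f"(LR = {lr:.2f})")
```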

%V 10 %P 277-295 %G eng %0 Journal Article %J Statistics and Computing %D 1998 %T A hybrid Markov chain for the Bayesian analysis of the multinomial probit model %A Nobile, Agostino %K Bayesian analysis %K Gibbs sampling %K Metropolis algorithm %K Multinomial probit model %X

Bayesian inference for the multinomial probit model, using the Gibbs sampler with data augmentation, has been recently considered by some authors. The present paper introduces a modification of the sampling technique, by defining a hybrid Markov chain in which, after each Gibbs sampling cycle, a Metropolis step is carried out along a direction of constant likelihood. Examples with simulated data sets motivate and illustrate the new technique. A proof of the ergodicity of the hybrid Markov chain is also given.
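
The hybrid construction can be illustrated on a toy target rather than the multinomial probit model itself. The sketch below is a generic illustration, not Nobile's sampler: the target is a correlated bivariate normal, and the Metropolis step moves along a fixed direction, whereas the paper chooses a direction of constant likelihood. All constants are assumptions.

```python
import math
import random

# Generic illustration of the hybrid chain on a standard bivariate normal
# target with correlation RHO; after each Gibbs cycle, an extra Metropolis
# move is attempted along a fixed direction.

RHO = 0.95
random.seed(1)

def log_density(x, y):
    """Unnormalized log density of the bivariate normal target."""
    return -(x * x - 2 * RHO * x * y + y * y) / (2 * (1 - RHO ** 2))

def gibbs_cycle(x, y):
    """One Gibbs cycle: each full conditional is N(RHO * other, 1 - RHO^2)."""
    x = random.gauss(RHO * y, math.sqrt(1 - RHO ** 2))
    y = random.gauss(RHO * x, math.sqrt(1 - RHO ** 2))
    return x, y

def metropolis_step(x, y, direction=(1.0, 1.0), scale=1.0):
    """Metropolis move along a fixed direction, accepted with the usual
    log-ratio rule."""
    step = random.gauss(0.0, scale)
    xp, yp = x + step * direction[0], y + step * direction[1]
    if math.log(random.random()) < log_density(xp, yp) - log_density(x, y):
        return xp, yp
    return x, y

x = y = 0.0
draws = []
for _ in range(5000):
    x, y = gibbs_cycle(x, y)        # ordinary Gibbs sampling cycle
    x, y = metropolis_step(x, y)    # the extra 'hybrid' Metropolis move
    draws.append(x)

print(f"sample mean of x: {sum(draws) / len(draws):.3f} (target mean 0)")
```

With high correlation the plain Gibbs sampler moves in small steps along the ridge of the target, and the added move along the ridge direction is what improves mixing, which is the motivation for the hybrid chain.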

%B Statistics and Computing %I Kluwer Academic Publishers %V 8 %P 229-242 %G eng %U http://dx.doi.org/10.1023/A%3A1008905311214 %R 10.1023/A:1008905311214 %0 Journal Article %J Journal of Statistical Planning and Inference %D 1995 %T On high level exceedance modeling and tail inference %A M. R. Leadbetter %K Central limit theory %K Exceedance modeling %K Extreme values %K Tail estimation %X

This paper discusses a general framework common to some varied known and new results involving measures of threshold exceedance by high values of stationary stochastic sequences. In particular, these concern the following. (a) Probabilistic modeling of infrequent but potentially damaging physical events such as storms, high stresses, and high pollution episodes, describing both repeated occurrences and associated ‘damage’ magnitudes. (b) Statistical estimation of ‘tail parameters’ of a stationary stochastic sequence {X_j}. This includes a variety of estimation problems, in particular cases such as estimation of expected lengths of clusters of high values (e.g. storm durations), of interest in (a). ‘Very high’ values (leading to Poisson-based limits for exceedance statistics) and ‘high’ values (giving normal limits) are considered and exhibited as special cases within the general framework of central limit results for ‘random additive interval functions’. The case of array sums of dependent random variables is revisited within this framework, clarifying the role of dependence conditions and providing minimal conditions for the characterization of possible limit types. The methods are illustrated by the construction of confidence limits for the mean of an ‘exceedance statistic’ measuring high ozone levels, based on Philadelphia monitoring data.
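
A minimal sketch of the exceedance bookkeeping the paper formalizes is given below; the simulated AR(1) series, the threshold, and the naive binomial standard error are all assumptions, and the interval shown ignores exactly the dependence effects (cluster inflation) that the paper's central limit results account for.

```python
import math
import random

# Sketch with simulated data (an AR(1) sequence and an arbitrary threshold,
# both assumptions): count exceedances, measure cluster lengths ('storm
# durations'), and form a naive normal-limit confidence interval.

random.seed(7)
PHI, N, U = 0.7, 10_000, 2.0   # AR(1) coefficient, series length, threshold

x, exceed = 0.0, []
for _ in range(N):
    x = PHI * x + random.gauss(0.0, math.sqrt(1 - PHI ** 2))
    exceed.append(x > U)

# Cluster lengths: maximal runs of consecutive exceedances.
clusters, run = [], 0
for e in exceed:
    if e:
        run += 1
    elif run:
        clusters.append(run)
        run = 0
if run:
    clusters.append(run)

p_hat = sum(exceed) / N        # exceedance rate
# Naive i.i.d. standard error; under dependence, clustering inflates the
# true variability (roughly by a factor tied to the mean cluster length).
se_iid = math.sqrt(p_hat * (1 - p_hat) / N)
mean_cluster = sum(clusters) / len(clusters) if clusters else float("nan")

print(f"exceedance rate {p_hat:.4f}, "
      f"naive 95% CI ({p_hat - 1.96 * se_iid:.4f}, {p_hat + 1.96 * se_iid:.4f})")
print(f"mean cluster length {mean_cluster:.2f}")
```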

%B Journal of Statistical Planning and Inference %V 45 %P 247-280 %G eng %R 10.1016/0378-3758(94)00075-1