4 editions of The generalized jackknife statistic found in the catalog.
Bibliography: p. 303-306.
Statement: [by] H. L. Gray and W. R. Schucany.
Series: Statistics: textbooks and monographs, v. 1
Contributions: Schucany, W. R., joint author.
LC Classifications: QA276.8 .G7
The Physical Object:
Pagination: x, 308 p.
Number of Pages: 308
LC Control Number: 75179385
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. The jackknife method has the advantage of being more stable, easy to code, easy to understand (no matrix algebra required), and easy to interpret (meaningful coefficients).
The jackknife was proposed by Quenouille in the mid-1950s; in fact, the jackknife predates the bootstrap, and (with m = n−1) it is less computer-intensive than the bootstrap. The name describes a Swiss penknife, easy to carry around; by analogy, Tukey (1958) coined the term in statistics for a general-purpose tool. This section discusses the jackknife, and the next section will discuss the bootstrap. We define the jackknife averages x^J_i by

    x^J_i ≡ (1/(N−1)) Σ_{j≠i} x_j,   (14)

so x^J_i is the average of all the x-values except x_i. Similarly we define

    f^J_i ≡ f(x^J_i).   (15)

The jackknife estimate of f(X̄) is then the average of the f^J_i.
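Equations (14) and (15) can be sketched directly in code. This is a minimal illustration, not from the book; the names `jackknife_averages` and `jackknife_estimate` are my own, and `f` is assumed to be a smooth function of the sample mean:

```python
# Sketch of equations (14)-(15): leave-one-out ("jackknife") averages.
# Illustrative names; f is any smooth function of the sample mean.

def jackknife_averages(xs):
    """Return x^J_i = (1/(N-1)) * sum_{j != i} x_j for each i."""
    n = len(xs)
    total = sum(xs)
    return [(total - x) / (n - 1) for x in xs]

def jackknife_estimate(xs, f):
    """Average of f^J_i = f(x^J_i): the jackknife estimate of f(X-bar)."""
    avgs = jackknife_averages(xs)
    return sum(f(a) for a in avgs) / len(avgs)
```

For example, with `xs = [1, 2, 3, 4]` the leave-one-out averages are 3, 8/3, 7/3, and 2, and for the identity `f` the jackknife estimate recovers the sample mean 2.5.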
The jackknife tends to perform better for confidence-interval estimation for pairwise agreement measures; bootstrapping performs better for skewed distributions; and the jackknife is more suitable for small original data samples. Reference: Efron, B. (1982), "The Jackknife, the Bootstrap, and Other Resampling Plans," CBMS-NSF Regional Conference Series, monograph #38, SIAM.

Bootstrap and Jackknife Estimation of Sampling Distributions. A general view of the bootstrap: we begin with a general approach to bootstrap methods. The goal is to formulate the ideas in a context which is free of particular model assumptions. Suppose that the data X ~ P ∈ P = {P_θ : θ ∈ Θ}. The parameter space Θ is allowed to be…
The bootstrap was inspired by the previous success of the jackknife procedure. Imagine that a sample of n independent, identically distributed observations from an unknown distribution has been gathered, and the sample mean, Ȳ, has been computed.
Jackknife and bootstrap estimates of these quantities are introduced along with some heuristic justifications. Theory and Methods of Statistics covers essential topics for advanced graduate students and professional research statisticians.
This comprehensive resource covers many important areas in one manageable volume, including core topics. The flexibility of the definition of the first-order generalized jackknife is exploited so that its relation to the method of statistical differentials can be seen.
The estimators presented have the same bias-reduction and asymptotic distributional properties as the usual generalized jackknife. Furthermore, the book includes recently developed methods, such as mixed-model diagnostics, mixed-model selection, and the jackknife method in the context of mixed models.
The book is aimed at students, researchers and other practitioners who are interested in using mixed models for statistical data analysis (Springer-Verlag New York). "This book breaks away from more theoretically burdensome texts, focusing on providing a set of useful tools that help readers understand the theoretical underpinning of statistical methodology." --SciTech Book News, March. "This (hardback) book is one of the most up-to-date and easily understood texts in the field of mathematical statistics."
The jackknife fails for non-smooth statistics, such as the sample median. If θ̂_n denotes the sample median in the univariate case, then in general the jackknife variance estimate is inconsistent:

    Var_J(θ̂_n) / Var(θ̂_n) → ((1/2) χ²₂)²

in distribution, where χ²₂ denotes a chi-square random variable with 2 degrees of freedom (see Efron, 1982).
So in this case, the jackknife method does not lead to a consistent variance estimate.
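A quick numerical sketch of this failure (illustrative code, not from the text): for the sample median, the leave-one-out estimates collapse onto at most two distinct values, so the jackknife variance is driven by that degenerate two-point spread rather than by the sampling variability of the median. The helper names below are my own:

```python
# Illustrative sketch: the delete-1 jackknife variance of the sample median
# collapses onto at most two distinct leave-one-out values.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def jackknife_variance(xs, stat):
    """Tukey's jackknife variance: (n-1)/n * sum (theta_(i) - theta_bar)^2."""
    n = len(xs)
    loo = [stat(xs[:i] + xs[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    return (n - 1) / n * sum((t - loo_mean) ** 2 for t in loo)

data = [1, 2, 3, 4, 5, 6]
# Only two distinct leave-one-out medians survive, however large n is:
loo_medians = {median(data[:i] + data[i + 1:]) for i in range(len(data))}
```

Here `loo_medians` contains only the two middle order statistics {3, 4}, which is the source of the inconsistency noted above.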
The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients.
Additional physical format: online version: Gray, Henry L., The Generalized Jackknife Statistic, New York: M. Dekker (OCoLC). The jackknife only works well for linear statistics (e.g., the mean). It fails to give accurate estimates for non-smooth (e.g., the median) and nonlinear (e.g., the correlation coefficient) cases.
Thus improvements to this technique were developed, such as the delete-d jackknife. In statistics, the jackknife is a resampling technique especially useful for variance and bias estimation.
The jackknife pre-dates other common resampling methods such as the bootstrap. The jackknife estimate of a parameter is found by systematically leaving out each observation from the dataset, recomputing the estimate on the remaining observations, and then averaging these calculations.
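The delete-1 procedure just described can be sketched for an arbitrary statistic. This is a minimal illustration under the assumption that `statistic` is any function of a list of observations; the function names are illustrative:

```python
# Sketch of the delete-1 jackknife procedure: leave each observation out
# in turn, recompute the statistic, and average the replicates.

def jackknife_replicates(data, statistic):
    """Recompute the statistic with each observation left out in turn."""
    return [statistic(data[:i] + data[i + 1:]) for i in range(len(data))]

def jackknife_mean(data, statistic):
    """Average of the leave-one-out replicates (the jackknife estimator)."""
    reps = jackknife_replicates(data, statistic)
    return sum(reps) / len(reps)
```

For the sample mean, the leave-one-out replicates of `[1, 2, 3]` are 2.5, 2.0, and 1.5, and their average recovers the full-sample mean.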
Robert Kissell and Jim Poserina, in Optimal Sports Math, Statistics, and Fantasy: Jackknife Sampling Techniques.
Jackknife sampling is another type of resampling technique that is used to estimate parameter values and corresponding standard deviations similar to bootstrapping.
The sampling method for the jackknife technique requires that the analyst omit a single observation in each iteration.

Generalized linear models (GLMs) extend linear regression to models with a non-Gaussian or even discrete response.
GLM theory is predicated on the exponential family of distributions—a class so rich that it includes the commonly used logit, probit, and Poisson models. The jackknife method estimates the standard error (and bias) of statistics without making any parametric assumptions about the population that generated the data.
It uses only the sample data. The jackknife method manufactures jackknife samples from the data: a jackknife sample is a "leave-one-out" resample of the data. The jackknife was developed by Quenouille (1949, 1956) as a general method to remove bias from estimators.
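Quenouille's bias correction can be sketched as θ_jack = n·θ̂ − (n−1)·(average of the leave-one-out estimates). A classic check, shown below with illustrative names, is the "divide by n" variance estimator, whose O(1/n) bias the correction removes exactly:

```python
# Sketch of Quenouille's bias-corrected jackknife estimator:
#   theta_jack = n*theta_hat - (n-1) * mean(leave-one-out estimates).
# Demonstrated on the biased "divide by n" variance estimator.

def biased_var(xs):
    """Variance with denominator n (biased downward by a factor (n-1)/n)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def jackknife_bias_corrected(data, statistic):
    n = len(data)
    theta_hat = statistic(data)
    loo = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    return n * theta_hat - (n - 1) * sum(loo) / n

sample = [2.0, 4.0, 6.0, 8.0]
corrected = jackknife_bias_corrected(sample, biased_var)
```

For this sample the biased estimate is 5.0, while the jackknife-corrected value is 20/3, exactly the usual unbiased (n−1 denominator) variance.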
Tukey (1958) noticed that the approach also led to a method for estimating variances. Since that time, the jackknife has been used more widely.

General overview: Simon produced a book, Resampling: The New Statistics, an example-based book on Monte Carlo, permutation (randomization) tests, and the bootstrap, available for free on the Resampling Stats website.
I found that the following examples demonstrate the effectiveness of these methods.
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields.
The book offers a systematic approach to inference about non-Gaussian linear mixed models.

Statistics > Resampling > Jackknife estimation. Stata syntax:

    jackknife exp_list [, options eform_option] : command

Options (Main): eclass stores the number of observations used in e(N); rclass stores the number of observations used in r(N).

Similarly, one can define a jackknife P-value for the hypothesis H₀: µ = µ₀ by comparing

    Z = √n (ps(X) − µ₀) / √(V_ps(X)) = (ps(X) − µ₀) / √((1/n) V_ps(X))   (4)

with a standard normal variable.
Remark: Technically speaking, the pseudovalues in (1) are for what is called the delete-one jackknife. There is also a more general delete-k (or block) jackknife.
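The delete-one pseudovalues and the Z statistic of equation (4) can be sketched together. This is a minimal illustration with names of my own choosing; `ps_bar` plays the role of ps(X) (the pseudovalue mean) and `v` the role of V_ps(X) (the sample variance of the pseudovalues):

```python
import math

# Sketch of delete-one pseudovalues p_i = n*theta_hat - (n-1)*theta_(i)
# and the jackknife Z statistic of equation (4); names are illustrative.

def pseudovalues(data, statistic):
    """Delete-one pseudovalues of the given statistic."""
    n = len(data)
    theta_hat = statistic(data)
    return [n * theta_hat - (n - 1) * statistic(data[:i] + data[i + 1:])
            for i in range(n)]

def jackknife_z(data, statistic, mu0):
    """Z = (ps_bar - mu0) / sqrt(V_ps / n), compared with a standard normal."""
    ps = pseudovalues(data, statistic)
    n = len(ps)
    ps_bar = sum(ps) / n
    v = sum((p - ps_bar) ** 2 for p in ps) / (n - 1)  # sample variance V_ps
    return (ps_bar - mu0) / math.sqrt(v / n)
```

A useful sanity check: when the statistic is the sample mean, each pseudovalue reduces to the corresponding observation itself, so the jackknife Z coincides with the ordinary one-sample z statistic.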