Academic literature on the topic 'Massive sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Massive sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Massive sample size"

1

Mittal, S., D. Madigan, R. S. Burd, and M. A. Suchard. "High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis." Biostatistics 15, no. 2 (2013): 207–21. http://dx.doi.org/10.1093/biostatistics/kxt043.

2

Damjanov, Ivana. "The evolving structure of massive quiescent galaxies." Proceedings of the International Astronomical Union 8, S295 (2012): 101–4. http://dx.doi.org/10.1017/s1743921313004444.

Abstract:
The evolution of size and shape of massive quiescent galaxies over cosmic history has been challenging to explain within standard models of galaxy assembly. Several mechanisms have been proposed to explain the size growth of these systems, including major mergers, expansion, and late accretion via a series of minor mergers. The central mass density is shown to be an excellent tool for discriminating between different evolutionary scenarios. We present here the analysis performed on a spectroscopic sample of ~500 quiescent systems with stellar masses M* > 10^10 M⊙ spanning the redshift range 0.2 < z < 2.7, for which we calculate stellar mass densities within the central 1 kpc and show that this quantity evolves linearly with redshift. Our results do not change when only systems at constant number density are considered in order to account for the mass growth during mergers and to relate progenitors to their descendants. The discrepancy between our findings and other recent studies performed on samples an order of magnitude smaller emphasizes the need for larger homogeneous spectroscopic samples to be used in such analyses.
3

Xiao, Dengyu, Yixiang Huang, Chengjin Qin, Zhiyu Liu, Yanming Li, and Chengliang Liu. "Transfer learning with convolutional neural networks for small sample size problem in machinery fault diagnosis." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 14 (2019): 5131–43. http://dx.doi.org/10.1177/0954406219840381.

Abstract:
Data-driven machinery fault diagnosis has gained much attention from academic research and industry to guarantee the machinery reliability. Traditional fault diagnosis frameworks are commonly under a default assumption: the training and test samples share the similar distribution. However, it is nearly impossible in real industrial applications, where the operating condition always changes over time and the quantity of the same-distribution samples is often not sufficient to build a qualified diagnostic model. Therefore, transfer learning, which possesses the capacity to leverage the knowledge learnt from the massive source data to establish a diagnosis model for the similar but small target data, has shown potential value in machine fault diagnosis with small sample size. In this paper, we propose a novel fault diagnosis framework for the small amount of target data based on transfer learning, using a modified TrAdaBoost algorithm and convolutional neural networks. First, the massive source data with different distributions is added to the target data as the training data. Then, a convolutional neural network is selected as the base learner and the modified TrAdaBoost algorithm is employed for the weight update of each training sample to form a stronger diagnostic model. The whole proposition is experimentally demonstrated and discussed by carrying out the tests of six three-phase induction motors under different operating conditions and fault types. Results show that compared with other methods, the proposed framework can achieve the highest fault diagnostic accuracy with inadequate target data.
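
The framework above couples a convolutional neural network base learner with TrAdaBoost-style reweighting of source and target samples. As a rough illustration of the reweighting idea only (this sketch follows the classical TrAdaBoost update, not the authors' modified variant; all names are hypothetical), one boosting round in Python might look like this:

import numpy as np

def tradaboost_round(y_src, pred_src, w_src, y_tgt, pred_tgt, w_tgt, n_rounds):
    # Classical TrAdaBoost-style update: misclassified source samples are
    # down-weighted (they look less relevant to the target task), while
    # misclassified target samples are up-weighted, as in ordinary boosting.
    err_src = (pred_src != y_src).astype(float)
    err_tgt = (pred_tgt != y_tgt).astype(float)
    eps = np.clip(np.sum(w_tgt * err_tgt) / np.sum(w_tgt), 1e-10, 0.499)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(y_src)) / n_rounds))
    beta_tgt = eps / (1.0 - eps)
    w_src = w_src * beta_src ** err_src     # shrink weights of unhelpful source samples
    w_tgt = w_tgt * beta_tgt ** (-err_tgt)  # grow weights of hard target samples
    return w_src, w_tgt

In the paper's setting, a CNN trained on the weighted union of source and target data would play the role of the base learner in each such round.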
4

dos Reis, Sandra N., Fernando Buitrago, Polychronis Papaderos, et al. "Structural analysis of massive galaxies using HST deep imaging at z < 0.5." Astronomy & Astrophysics 634 (January 28, 2020): A11. http://dx.doi.org/10.1051/0004-6361/201936276.

Abstract:
Context. The most massive galaxies (Mstellar ≥ 10^11 M⊙) in the local Universe are characterized by a bulge-dominated morphology and old stellar populations, in addition to being confined to a tight mass-size relation. Identifying their main components can provide insights into their formation mechanisms and subsequent mass assembly. Aims. Taking advantage of Hubble Space Telescope (HST) CANDELS data, we analyze the lowest redshift (z < 0.5) massive galaxies in the H and I band in order to disentangle their structural constituents and study possible faint non-axisymmetric features. Methods. Our final sample consists of 17 massive galaxies. Due to the excellent HST spatial resolution for intermediate redshift objects, they are hard to model by purely automatic parametric fitting algorithms. We performed careful single and double (bulge-disk decompositions) Sérsic fits to their galaxy surface brightness profiles. We compare the model color profiles with the observed ones and also derive multi-component global effective radii attempting to obtain a better interpretation of the mass-size relation. Additionally, we test the robustness of our measured structural parameters via simulations. Results. We find that the Sérsic index does not offer a good proxy for the visual morphological type for our sample of massive galaxies. Our derived multi-component effective radii give a better description of the size of our sample galaxies than those inferred from single Sérsic models with GALFIT. Our galaxy population lies on the scatter of the local mass-size relation, indicating that these massive galaxies have not experienced a significant growth in size since z ∼ 0.5. Interestingly, the few outliers are late-type galaxies, indicating that spheroids must reach the local mass-size relation earlier. For most of our sample galaxies, both single- and multi-component Sérsic models with GALFIT show substantial systematic deviations from the observed surface brightness profiles in the outskirts. These residuals may be partly due to several factors, namely a nonoptimal data reduction for low surface brightness features or the existence of prominent stellar haloes for massive galaxies, or they could also arise from conceptual shortcomings of parametric 2D image decomposition tools. They consequently propagate into galaxy color profiles. This is a significant obstacle to the exploration of the structural evolution of galaxies, which calls for a critical assessment and refinement of existing surface photometry techniques.
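
For reference, the single Sérsic model fitted above has the standard textbook form (not specific to this paper): I(R) = I_e · exp{ -b_n [ (R/R_e)^(1/n) - 1 ] }, where R_e is the effective (half-light) radius, I_e the intensity at R_e, n the Sérsic index, and b_n ≈ 2n - 1/3 is chosen so that half the total light falls within R_e. A bulge-disk decomposition fits the sum of two such components, typically with n left free for the bulge and n = 1 (an exponential profile) for the disk.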
5

Cheng, Cheng, Cong Kevin Xu, Lizhi Xie, et al. "UV and NIR size of the low-mass field galaxies: the UV compact galaxies." Astronomy & Astrophysics 633 (January 2020): A105. http://dx.doi.org/10.1051/0004-6361/201936186.

Abstract:
Context. Most of the massive star-forming galaxies are found to have “inside-out” stellar mass growth modes, which means the inner parts of the galaxies mainly consist of the older stellar population, while star formation in the outskirts of the galaxy is still ongoing. Aims. The high-resolution HST images from the Hubble Deep UV Legacy Survey and Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey projects, with their unprecedented depth in both the F275W and F160W bands, are the perfect data sets to study the forming and formed stellar distributions directly. Methods. We selected the low redshift (0.05 < zspec < 0.3) galaxy sample from the GOODS-North field where the HST F275W and F160W images are available. Then we measured the half light radius in the F275W and F160W bands, which are the indicators of the star formation and stellar mass. Results. By comparing the F275W and F160W half light radii, we find that the massive galaxies mainly follow the “inside-out” growth mode, which is consistent with the previous results. Moreover, the HST F275W and F160W images reveal that some of the low-mass galaxies (< 10^8 M⊙) have the “outside-in” growth mode: their images show a compact UV morphology, implying an ongoing star formation in the galaxy centre, while the stars in the outskirts of the galaxies are already formed. The two modes transition smoothly at a stellar mass range of about 10^8–10^9 M⊙ with a large scatter. We also try to identify the possible neighbour massive galaxies from the SDSS data, which represent the massive galaxy sample. We find that all of the spec-z selected galaxies have no massive galaxy nearby. Thus the “outside-in” mode we find in the low-mass galaxies is not likely to originate from the environment.
6

van Driel, Wim, and Bert van den Broek. "Barred Extreme IRAS Galaxies." International Astronomical Union Colloquium 157 (1996): 256–58. http://dx.doi.org/10.1017/s0252921100049897.

Abstract:
We studied a statistically complete sample of 57 southern so-called extreme IRAS galaxies, i.e., objects with a high far-infrared/blue luminosity ratio, LFIR/LB > 3, using optical (imaging and spectra), radio continuum, and CO(1–0) line observations. The sample can be divided into three distinct categories: dwarfs (20%), barred spirals (35%), and interacting systems (35%). The barred galaxies are generally morphologically undisturbed, isolated systems, with average star formation rates (4 M⊙ yr–1) and efficiencies (LFIR/MH2 = 16 L⊙/M⊙) for galaxies in our sample. An enhanced massive star formation rate is the cause of the infrared brightness in 93% of all galaxies in the sample. The nuclear region is the most important star formation locus, generally unresolved at 1" resolution, i.e., less than 0.2-0.6 kpc size (H0=75 km s–1 Mpc–1), though 2 kpc size in three cases. In about two-thirds of the extreme IRAS SB’s, fainter, diffuse (2.5-10 kpc size) massive star formation is seen in the bar as well.
7

Sonnenfeld, Alessandro, Wenting Wang, and Neta Bahcall. "Hyper Suprime-Cam view of the CMASS galaxy sample." Astronomy & Astrophysics 622 (January 24, 2019): A30. http://dx.doi.org/10.1051/0004-6361/201834260.

Abstract:
Aims. We wish to determine the distribution of dark matter halo masses as a function of the stellar mass and the stellar mass profile for massive galaxies in the Baryon Oscillation Spectroscopic Survey (BOSS) constant-mass (CMASS) sample. Methods. We used grizy photometry from the Hyper Suprime-Cam (HSC) to obtain Sérsic fits and stellar masses of CMASS galaxies for which HSC weak-lensing data are available. This sample was visually selected to have spheroidal morphology. We applied a cut in stellar mass, log M*/M⊙ > 11.0, and thus selected ∼10 000 objects. Using a Bayesian hierarchical inference method, we first investigated the distribution of Sérsic index and size as a function of stellar mass. Then, making use of shear measurements from HSC, we measured the distribution of halo mass as a function of stellar mass, size, and Sérsic index. Results. Our data reveal a steep stellar mass-size relation Re ∝ M*^βR, with βR larger than unity, and a positive correlation between Sérsic index and stellar mass: n ∝ M*^0.46. The halo mass scales approximately with the 1.7 power of the stellar mass. We do not find evidence for an additional dependence of halo mass on size or Sérsic index at fixed stellar mass. Conclusions. Our results disfavour galaxy evolution models that predict significant differences in the size growth efficiency of galaxies living in low- and high-mass halos.
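
Written out, the scaling relations reported in this abstract are Re ∝ M*^βR with βR > 1, n ∝ M*^0.46, and M_halo ∝ M*^1.7 (normalizations omitted here). As a quick numerical reading of the last relation, doubling the stellar mass corresponds to a halo roughly 2^1.7 ≈ 3.2 times more massive.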
8

Li, Xiaoyan, and Wei Yang. "Size Dependence of Dislocation-Mediated Plasticity in Ni Single Crystals: Molecular Dynamics Simulations." Journal of Nanomaterials 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/245941.

Abstract:
We investigate the compressive yielding of Ni single crystals by performing atomistic simulations with the sample diameters in the range of 5 nm ∼ 40 nm. Remarkable effects of sample sizes on the yield strength are observed in the nanopillars with two different orientations. The deformation mechanisms are characterized by massive dislocation activities within a single slip system and a nanoscale deformation twinning in an octal slip system. A dislocation dynamics-based model is proposed to interpret the size and temperature effects in single slip-oriented nanopillars by considering the nucleation of incipient dislocations.
9

Krogager, J. K., A. W. Zirm, S. Toft, A. Man, and G. Brammer. "A spectroscopic sample of massive, quiescent z ∼ 2 galaxies: implications for the evolution of the mass-size relation." Astrophysical Journal 797, no. 1 (2014): 17. http://dx.doi.org/10.1088/0004-637x/797/1/17.

10

Meyvis, Tom, and Stijn M. J. Van Osselaer. "Increasing the Power of Your Study by Increasing the Effect Size." Journal of Consumer Research 44, no. 5 (2017): 1157–73. http://dx.doi.org/10.1093/jcr/ucx110.

Abstract:
As in other social sciences, published findings in consumer research tend to overestimate the size of the effect being investigated, due to both file drawer effects and abuse of researcher degrees of freedom, including opportunistic analysis decisions. Given that most effect sizes are substantially smaller than would be apparent from published research, there has been a widespread call to increase power by increasing sample size. We propose that, aside from increasing sample size, researchers can also increase power by boosting the effect size. If done correctly, removing participants, using covariates, and optimizing experimental designs, stimuli, and measures can boost effect size without inflating researcher degrees of freedom. In fact, careful planning of studies and analyses to maximize effect size is essential to be able to study many psychologically interesting phenomena when massive sample sizes are not feasible.
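
The trade-off the authors describe between effect size and sample size can be made concrete with a standard power calculation. A minimal sketch using Python and statsmodels (a two-sample t-test at 80% power and alpha = 0.05; the effect sizes are illustrative, not taken from the article):

from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):   # Cohen's d for small, medium, and large effects
    n_per_cell = power_analysis.solve_power(effect_size=d, alpha=0.05,
                                            power=0.8, alternative='two-sided')
    print(f"d = {d}: about {n_per_cell:.0f} participants per cell")

This prints roughly 393, 64, and 26 participants per cell, respectively: boosting the effect size is often far cheaper than recruiting a massive sample.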

Dissertations / Theses on the topic "Massive sample size"

1

Schintler, Laurie A., and Manfred M. Fischer. "The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6637/1/2018%2D10%2D28_Big_Data_on_cities_and_regions_untrack_changes.pdf.

Abstract:
Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise to combine increasingly detailed data for each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high-dimensionality of Big Data and their geo-temporal character introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features drive a paradigm change in data management and analysis, as well as in the computing environment. Series: Working Papers in Regional Science.

Books on the topic "Massive sample size"

1

Cumming, Douglas, ed. The Oxford Handbook of IPOs. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190614577.001.0001.

Abstract:
Firms generally begin as privately owned entities. When they grow large enough, the decision to go public and its consequences are among the most crucial times in a firm’s life cycle. The first time a firm is a reporting issuer gives rise to tremendous responsibilities about disclosing public information and accountability to a wide array of retail shareholders and institutional investors. Initial public offerings (IPOs) offer tremendous opportunities to raise capital. The economic and legal landscape for IPOs has been rapidly evolving across countries. There have been fewer IPOs in the United States in the aftermath of the 2007–2009 financial crisis and associated regulatory reforms that began in 2002. In 1980–2000, an average of 310 firms went public every year, while in 2001–2014 an average of 110 firms went public every year. At the same time, there are so many firms that seek an IPO in China that there has been a massive waiting list of hundreds of firms in recent years. Some countries are promoting small junior stock exchanges to go public early, and even crowdfunding to avoid any prospectus disclosure. Financial regulation of analysts and investment banks has been evolving in ways that drastically impact the economics of going public—in some countries, such as the United States, drastically increasing the minimum size of a company before it can expect to go public. This Handbook not only systematically and comprehensively consolidates a large body of literature on IPOs, but provides a foundation for future debates and inquiry.
2

Trieloff, Mario. Noble Gases. Oxford University Press, 2017. http://dx.doi.org/10.1093/acrefore/9780190647926.013.30.

Abstract:
This is an advance summary of a forthcoming article in the Oxford Encyclopedia of Planetary Science. Please check back later for the full article. Although the second most abundant element in the cosmos is helium, noble gases are also called rare gases. The reason is that they are not abundant on terrestrial planets like our Earth, which is characterized by orders of magnitude depletion of—particularly light—noble gases when compared to the cosmic element abundance pattern. Indeed, such geochemical depletion and enrichment processes make noble gases so versatile concerning planetary formation and evolution: When our solar system formed, the first small grains started to adsorb small amounts of noble gases from the protosolar nebula, resulting in depletion of light He and Ne when compared to heavy noble gases Ar, Kr, and Xe: the so-called planetary type abundance pattern. Subsequent flash heating of the first small mm to cm-sized objects (chondrules and calcium, aluminum rich inclusions) resulted in further depletion, as well as heating—and occasionally differentiation—on small planetesimals, which were precursors of larger planets and which we still find in the asteroid belt today from where we get rocky fragments in the form of meteorites. In most primitive meteorites, we even can find tiny rare grains that are older than our solar system and condensed billions of years ago in circumstellar atmospheres of, for example, red giant stars. These grains are characterized by nucleosynthetic anomalies and particularly identified by noble gases, for example, so-called s-process xenon. While planetesimals acquired a depleted noble gas component strongly fractionated in favor of heavy noble gases, the sun and also gas giants like Jupiter attracted a much larger amount of gas from the protosolar nebula by gravitational capture. This resulted in a cosmic or “solar type” abundance pattern, containing the full complement of light noble gases. Contrary to Jupiter or the sun, terrestrial planets accreted from planetesimals with only minor contributions from the protosolar nebula, which explains their high degree of depletion and basically “planetary” elemental abundance pattern. Indeed this depletion enables another tool to be applied in noble gas geo- and cosmochemistry: ingrowth of radiogenic nuclides. Due to heavy depletion of primordial nuclides like 36Ar and 130Xe, radiogenic ingrowth of 40Ar by 40K decay, 129Xe by 129I decay, or fission Xe from 238U or 244Pu decay are precisely measurable, and allow insight into the chronology of fractionation of lithophile parent nuclides and atmophile noble gas daughters, mainly caused by mantle degassing and formation of the atmosphere. Already the dominance of 40Ar in the terrestrial atmosphere allowed C. F. von Weizsäcker to conclude that most of the terrestrial atmosphere originated by degassing of the solid Earth, which is an ongoing process today at mid-ocean ridges, where primordial helium leaves the lithosphere for the first time. Mantle degassing was much more massive in the past; in fact, most of the terrestrial atmosphere formed during the first 100 million years of Earth's history, and was completed at about the same time when the terrestrial core formed and accretion was terminated by a giant impact that also formed our moon. However, before that time, somehow also tiny amounts of solar noble gases managed to find their way into the mantle, presumably by solar wind irradiation of small planetesimals or dust accreting to Earth.
While the moon-forming impact likely dissipated the primordial atmosphere, today's atmosphere originated by mantle degassing and a late veneer with asteroidal and possibly cometary contributions. As other atmophile elements behave similarly to noble gases, they also trace the origin of major volatiles on Earth, for example, water, nitrogen, sulfur, and carbon.

Book chapters on the topic "Massive sample size"

1

Govindarajan, M. "Challenges in Big Data Analysis." In Encyclopedia of Information Science and Technology, Fifth Edition. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3479-3.ch041.

Abstract:
Big data brings new opportunities to modern society and challenges to data scientists. On one hand, big data holds great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of big data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. Prior to data analysis, data must be well constructed. However, considering the variety of datasets in big data, the efficient representation, access, and analysis of unstructured or semi-structured data are still challenging. Understanding the method by which data can be preprocessed is important to improve data quality and the analysis results. The purpose of this chapter is to highlight the big data challenges and also provide a brief description of each challenge.
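
One of the challenges listed above, spurious correlation, is easy to reproduce numerically. A minimal sketch (illustrative only, not from the chapter): with just 60 observations and 10,000 pure-noise features, some feature will almost always correlate strongly with a completely unrelated response.

import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 10_000                        # small sample, massive dimensionality
X = rng.standard_normal((n, p))          # noise features
y = rng.standard_normal(n)               # response independent of every feature

Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
corr = Xc.T @ yc / n                     # correlation of each feature with y
print(np.abs(corr).max())                # typically around 0.5 despite zero true signal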
2

Srivastava, Meenakshi. "A Surrogate Data-Based Approach for Validating Deep Learning Model Used in Healthcare." In Applications of Deep Learning and Big IoT on Personalized Healthcare Services. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2101-4.ch009.

Abstract:
IoT-based communication between medical devices has encouraged the healthcare industry to use automated systems which provide effective insight from the massive amount of gathered data. AI and machine learning have played a major role in the design of such systems. Accuracy and validation are key concerns, since copious training data is required in a neural network (NN)-based deep learning model. This is hardly feasible in medical research, because the size of data sets is constrained by the complexity and high cost of experiments. With only limited sample data available, validation of the NN remains a concern. The prediction of outcomes by an NN trained on a smaller data set cannot guarantee performance and exhibits unstable behaviors. Surrogate data-based validation of the NN can be viewed as a solution. In the current chapter, the classification of breast tissue data by an NN model is detailed. In the absence of a huge data set, a surrogate data-based validation approach has been applied. The discussed study can be applied to predictive modelling for applications described by small data sets.
3

Greenlaw, Raymond, H. James Hoover, and Walter L. Ruzzo. "Closing Remarks." In Limits to Parallel Computation. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195085914.003.0015.

Abstract:
The previous chapters have laid out the history, foundations, and mechanics of the theory of P-completeness. We have shown that this theory plays roughly the same role in the parallel complexity domain as NP-completeness does in the sequential domain. Having devoted much effort to establishing the notion of feasible highly parallel algorithms and arguing that P-completeness captures the notions of inherently sequential problems and algorithms, it is now appropriate to temper our case a bit with some additional observations. For some problems, depending on the relevant input size, it may not be worth the effort to search for a feasible highly parallel algorithm, assuming for example that you already have a √n time parallel algorithm. The following table shows the relationship between square roots and logarithms for various input sizes. Of course, for small input sizes the constants on the running times also play a major role. Although it is extremely risky to predict hardware trends, it seems safe to say that massively parallel computers containing billions of processors are not "just around the corner" and although potentially feasible, machines with millions of processors are not soon to become commodity personal computers. Thus, highly parallel algorithms will not be feasible if the processor requirements for an input of size n are much greater than n^2, and probably more like n log n. Even if you have sufficient numbers of processors for problems that interest you, your algorithm may succumb to the tyranny of asymptotics. For example, a parallel algorithm that uses √n time is probably preferable to one that uses (log n)^4 time, at least for values of n less than 10^13. As Table 11.1 illustrates, the only really practical polylogarithmic parallel time algorithms are O((log n)^2). Perhaps the limit to feasible highly parallel algorithms are those that run in (log n)^2 time and use O(n^2) processors. However, the search for an NC algorithm often leads to new insights into how a problem can be effectively parallelized. That is, a problem frequently is found to exhibit unexpected parallelism when the limits of its parallelism are pushed.
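
The table referred to in this passage is not reproduced here, but the comparison is easy to recompute. A small sketch (base-2 logarithms assumed) showing why a √n-time parallel algorithm beats polylogarithmic ones until n becomes enormous:

import math

for n in (10**3, 10**6, 10**9, 10**13):
    log_n = math.log2(n)
    print(f"n = {n:.0e}:  sqrt(n) = {math.sqrt(n):,.0f},  "
          f"(log n)^2 = {log_n**2:,.0f},  (log n)^4 = {log_n**4:,.0f}")

Around n = 10^13 the √n and (log n)^4 curves cross, which matches the claim above that (log n)^4-time algorithms only win for inputs larger than roughly 10^13.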
4

Taillant, Jorge Daniel. "Life Without Glaciers." In Glaciers. Oxford University Press, 2015. http://dx.doi.org/10.1093/oso/9780199367252.003.0011.

Abstract:
Climate change is accelerating glacier melt. In the same month that this book first went to the editors, scientists reported the irreversible collapse of a massive portion of the West Antarctic ice sheet at Thwaites Glacier. Thwaites Glacier had already been news years earlier when a massive piece of ice 50 km (31 mi) wide, nearly 150 km (93 mi) long, and 3 km (1.8 mi) thick—that’s more than thirty city blocks of ice stacked on top of each other—broke off into the ocean and became Thwaites iceberg. Imagine an ice cube about seventy-five times the size of Manhattan Island floating away into the ocean. With the new reported collapse, the entire West Antarctic ice sheet has now entered into a rapid and irreversible melting phase (Figure 6.1). Thwaites Glacier, as well as others in the Amundsen Bay sector, such as the Pine Island Glacier, form part of a massive ice sheet on Antarctica that is falling to pieces. This is an ice sheet larger than France, Spain, Germany, and Italy combined, and it contains nearly 30 million cubic kilometers of ice (that’s about seven million cubic miles; Gosnell, 2005, p. 109). As these colossal ice bodies fall into the warmer ocean, they will begin to melt away, eventually raising global sea levels by about 1.2 meters (4 ft) (Figure 6.2). The breakdown has come much more quickly than expected and has now entered into an irreversible “runaway process.” What should have taken thousands of years in the natural evolution of things will now be complete in just centuries or less. The Pine Island Glacier is a long, flowing ice stream in the northeastern part of Amundsen Bay, and it is the world’s greatest contributor of ice to the oceans through melting and calving processes. It is also another of the glaciers at risk of collapsing entirely into the ocean. Thwaites Glacier’s collapse is an indicator that the whole ice sheet may be in imminent danger.
5

Gleason, Philip. "Assimilative Tendencies and Curricular Crosscurrents." In Contending with Modernity. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195098280.003.0018.

Abstract:
Besides its massive impact on the institutional side of Catholic higher education, World War II affected the thinking of Catholic educators. We have already touched upon this dimension in noting how the war and postwar growth required them to expand their horizons and redouble their efforts in research, fundraising, and administration generally. Here we look more closely at how Catholics were affected by the great ideological revival of democracy that accompanied the war. This kind of influence was sometimes explicitly noted by Catholic leaders, as when Archbishop Richard Cushing of Boston called attention to the “neo-democratic mentality of returning servicemen and the university-age generation generally”; others recognized that it created problems since the Catholic church was so widely perceived as incompatible with democracy and “the American way of life.” We shall postpone examination of controversies stemming from this source to the next chapter, turning our attention in this one to the assimilative tendencies reflected in Catholics’ new appreciation for liberal democratic values, and to the major curricular concerns of the era which were also affected by the war. In no area did the democratic revival have a more profound long-range effect than in the impetus it lent to the movement for racial equality and civil rights for African Americans. The publication in 1944 of Gunnar Myrdal’s An American Dilemma marked an epoch in national understanding of what the book’s subtitle called “the Negro problem and modern democracy.” Myrdal himself stressed the importance of the wartime context, which made it impossible to ignore racial discrimination at home while waging war against Nazi racism. At the same time, increasing black militance, the massive migration of African Americans to northern industrial centers, and above all the great Detroit race riot of 1943—reinforced by the anti-Mexican “Zoot Suit” riots in Los Angeles the same summer—suddenly made the improvement of race relations an imperative for American society as a whole. By the end of the war, no fewer than 123 national organizations were working actively to “reduce intergroup tensions,” and the civil rights movement began a steady advance that led directly to the great judicial and political victories it won in the fifties and sixties.
6

Erçetin, Şefika Şule, and Şuay Nilhan Açıkalın. "Is President Erdoğan Really a Dictator?" In Advances in Religious and Cultural Studies. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0148-0.ch001.

Abstract:
The notion of dictatorship has been central in leadership exegeses the world over. Indeed, almost all leaders are alleged to be dictators at a certain point in time, once they sidestep the expectations that abound. Like in many a country, talk of President Recep Tayyip Erdogan being a dictator in Turkey has been massive over the years. The interesting conundrum, though, defeating all analyses of such nature is the authority on which the claim of dictatorship owes its abode or rests. One wonders whether a leader's being a dictator is determined by opposition politicians in a country, the local media, the international media, foreign politicians or the local masses (those benefitting directly or indirectly and not). It is also interesting to question the yardstick used for justification of the same; whether it is simply overstay in power, the character and appearance of a leader or the modus operandi of a leader. This conceptual paper therefore explored the so-called Erdogan dictatorship illusion of opposition parties in Turkey by examining the concept of dictatorship in leadership, Erdogan's assumed dictatorial accusations, and an effort toward disengagement of Erdogan from dictatorship claims. The paper has also shown that there are some dictatorship tendencies within opposition parties.
7

Gordon, Robert B. "Independent Artisans." In A Landscape Transformed. Oxford University Press, 2000. http://dx.doi.org/10.1093/oso/9780195128185.003.0006.

Abstract:
By 1730 New England colonists needed increasingly large amounts of iron for their expanding economy. Shipsmiths forged iron fastenings used in the vessels they built for the coastal and West Indian trades. The mariners who sailed these ships wanted large, strong iron anchors. Millwrights needed waterwheel axles and gudgeons, spindles, and numerous other iron components for gristmills, sawmills, fulling mills, and oil mills. Builders of the forges and furnaces that smelted and shaped iron products had to have iron hammerheads and forge plates. The pioneers on the frontier in New York and northern New England wanted massive iron kettles for boiling potash, usually the first cash crop they got off their newly cleared land. Everyone needed nails. Building a bloomery forge offered an adventurer in Connecticut’s Western Lands the easiest way to start making iron. One man could run a forge, although a helper made the work easier. The bloomery proprietor needed less capital than would be required for other types of ironworks. The region had plenty of easily developed water privileges of the right size to power a bloomery forge. Although it took skill and practice to make high-quality metal, a forge owner or hired hand could learn enough of the bloom smelting technique from an experienced smith within a few months to make serviceable metal. Iron of ordinary quality satisfied most people’s needs in the early days of the northwest. If the weather were bad, ore or fuel were unavailable, crops demanded attention, or the market of iron were slow, the proprietor could easily shut down his forge at short notice and restart it as soon as conditions improved. Although a bloomery forge could be part of an enterprise employing fifty or more hands, it could also be little more than a smithy in size and complexity. A farmer could accumulate enough money to build one. Alternatively, a number of individuals might take shares in a forge run by a single artisan. The proprietors of a mercantile business, or of grist or sawmills on the same or a nearby water privilege, could easily add a bloomery to their other enterprises.
8

Lowenstam, Heinz A., and Stephen Weiner. "Cnidaria." In On Biomineralization. Oxford University Press, 1989. http://dx.doi.org/10.1093/oso/9780195049770.003.0007.

Abstract:
The phylum Cnidaria or Coelenterates includes sea anemones, jellyfish, hydras, sea fans, and, of course, the corals. With few exceptions they are all marine organisms and most are inhabitants of shallow water. In spite of the great variation in shape, size, and mode of life, they all possess the same basic metazoan structural features: an internal space for digestion (gastrovascular cavity or coelenteran), a mouth, and a circle of tentacles, which are really just an extension of the body wall. The body wall in turn is composed of three layers: an outer layer of epidermis, an inner layer of cells lining the gastrovascular cavity, and, sandwiched between them, a so-called mesoglea (Barnes 1980). All these features are present in both of the basic structural types: the sessile polyp and the free-swimming medusa. During their life cycle, some cnidarians exhibit one or the other structural type whereas others pass through both. Most Cnidaria have no mineralized deposits. The ones that, to date, are known to have mineralized deposits are listed in Table 5.1. They are found in both the free-swimming medusae and the sessile polyps. Not surprisingly, these have very different types of mineralized deposits. In the medusae they are located exclusively within the statocyst where they constitute an important part of the organism’s gravity perception apparatus. Interestingly the statoconia of the Hydrozoa, examined to date for their major elemental compositions only, are all composed of amorphous Mg-Ca-phosphate, whereas those of the Scyphozoa and Cubozoa are composed of calcium sulfate. Calcium sulfate minerals (presumably gypsum) are not commonly formed by organisms and the only other known occurrence is in the Gamophyta among the Protoctista. Spangenberg (1976) and her colleagues have expertly documented this phenomenon in the Cnidaria. (For a more detailed discussion of mineralization and gravity perception see Chapter 11.) The predominant mineralized hard part associated with the sessile polyps is skeletal. These can take the form of skeletons composed of individual spicules, spicule aggregates, or massive skeletons. They are composed of aragonite, calcite, or both.
9

Maletskyy, Anatoliy Parfentievich, Yuriy Markovich Samchenko, and Natalia Mikhailivna Bigun. "Improving the Antitumor Effect of Doxorubicin in the Treatment of Eyeball and Orbital Tumors." In Advances in Precision Medicine Oncology. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.95080.

Abstract:
Malignant tumors account for 41–45.9% of orbital tumors, and they threaten both the organ of vision and the life of the patient. In our opinion, improving the effectiveness of treatment of malignant tumors can be pursued in the following areas: a) immobilization of doxorubicin in synthetic polymeric materials, which will fill the tissue structures that were resected and reduce the percentage of tumor recurrence; b) the use of nanomaterials for the delivery of doxorubicin to tumor cells. The aims were to develop a hydrogel implant and nanoparticles, and to study the diffusion kinetics of doxorubicin in the hydrogel implant and the ability of nanoparticles to transport doxorubicin. The developed gels based on acrylic acid (AAc) were obtained by radical polymerization of an aqueous solution of monomers (AAc and N, N-methylenebisacrylamide (MBA)) at a temperature of 70°C. Matrices based on polyvinyl formal (PVF) were obtained by treatment of polyvinyl alcohol (PVA) with formaldehyde in the presence of a strong acid. Experimental studies were performed on rabbits of the Chinchilla breed, weighing 2–3 kg, aged 5–6 months, which during the study were kept under the same conditions. We implanted the hybrid gel in the scleral sac, the orbital tissue, and the ear tissue of rabbits. Evaluation of the response of soft tissues and bone structures to the implant materials was carried out on the basis of changes in clinical and pathomorphological parameters after 10, 30 and 60 days. Diffusion of doxorubicin was examined by using UV spectroscopy [spectrophotometer-fluorimeter DS-11 FX + (DeNovix, USA)], analyzing samples at regular intervals during the day at a temperature of 25 °C. The concentration of active substances was determined by the normalized peak absorption of doxorubicin at 480 nm. The release kinetics of the antitumor drug doxorubicin were investigated by using a UV spectrometer “Specord M 40” (maximum absorption 480 nm). The developed hydrogel implant has good biocompatibility and germination of surrounding tissues in the structure of the implant, as well as the formation of a massive fibrous capsule around it. An important advantage of the implant is also the lack of its tendency to resorption. Moreover, the results showed that the diffusion kinetics of doxorubicin from a liquid-crosslinked hydrogel reaches a minimum therapeutic level within a few minutes, while in the case of a tightly crosslinked hydrogel it does so only after a few hours. It was also found that the liquid-crosslinked hydrogel adsorbs twice as much of the cytostatic doxorubicin. The analysis of the research results confirmed that the size of the nanoparticles is the main factor for improving drug delivery and penetration. Thus, nanoparticles with a diameter of less than 200 nm can penetrate into cells and are not removed from the circulatory system by macrophages, thereby prolonging their circulation in the body. About 10 nm. The developed hybrid hydrogel compositions have high mechanical strength and porosity, which provide 100% penetration of doxorubicin into experimental animal tissues. It was found that the kinetics of diffusion of drugs from the liquid-crosslinked hydrogel reaches a minimum therapeutic level within a few minutes, whereas in the case of the densely crosslinked hydrogel diffusion begins with a delay of several hours and the amount of drug released at equilibrium reaches much lower values (20–25%).
The obtained preliminary experimental results allow us to conclude that our developed pathways for the delivery of drugs, in particular, doxorubicin to tumor cells will increase the effectiveness of antitumor therapy.
10

Ehrenfeld, David. "Obsolescence." In Swimming Lessons. Oxford University Press, 2002. http://dx.doi.org/10.1093/oso/9780195148527.003.0016.

Full text
Abstract:
At the end of the Cretaceous period, the last dinosaurs disappeared from the earth, setting off an evolutionary jubilee among the Milquetoast-like mammals that survived them, and preparing the ground for what was to become, 65 million years later, a permanent source of gainful occupation for scientists whose job it is to wonder why the dinosaurs died out. Scores of reasons have been given for this remarkable concatenation of extinctions. Global climate and sea level were changed by a city-sized asteroid striking the earth near what is now the Yucatan, or by a massive set of volcanic eruptions, or by the solar system passing through the core of a giant molecular cloud, perhaps colliding with a supercomet loosened from the Oort cluster, which orbits the Sun beyond Pluto. Theories of catastrophic extinction abound. Some of the most daring even conjure up the specter of an unseen companion star to our Sun, named Nemesis, whose eccentric orbit brings a wave of potentially deadly comet showers—and extinctions—every 26 million years. But there are also paleontologists who argue that the dinosaurs went away gradually, not suddenly, over a period of millions of years, and that toward the end they coexisted with the earliest hooved mammals, including ancestors of horses, cows, and sheep. If extinction was gradual, a different line of thought opens up: perhaps the dinosaurs died out because they couldn’t adapt and compete in a changing world. The big lummoxes were obsolete. I heard about the dinosaurs’ obsolescence back in my student days. It was as satisfying a notion then as it is today, especially if you didn’t think about it too hard. Here were these lumbering, pea-brained reptiles, barely able to walk and chew gum at the same time, while all around and underneath them, cleverly hiding behind clumps of primitive vegetation and cleverly burrowing in tunnels in the ground, were the nerdy but smart little mammals about to emerge from the shadows and begin their ascent to glory—somewhat, it occurs to me now, like Bill Gates in the waning days of heavy manufacturing.

Conference papers on the topic "Massive sample size"

1

Shen, Zebang, Hui Qian, Tongzhou Mu, and Chao Zhang. "Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/378.

Abstract:
Nowadays, algorithms with fast convergence, small memory footprints, and low per-iteration complexity are particularly favorable for artificial intelligence applications. In this paper, we propose a doubly stochastic algorithm with a novel accelerating multi-momentum technique to solve large-scale empirical risk minimization problems for learning tasks. While enjoying a provably superior convergence rate, in each iteration the algorithm only accesses a mini-batch of samples and meanwhile updates a small block of variable coordinates, which substantially reduces the amount of memory reference when both the massive sample size and ultra-high dimensionality are involved. Specifically, to obtain an ε-accurate solution, our algorithm requires only O(log(1/ε)/sqrt(ε)) overall computation for the general convex case and O((n + sqrt(nκ)) log(1/ε)) for the strongly convex case. Empirical studies on huge-scale datasets are conducted to illustrate the efficiency of our method in practice.
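
"Doubly stochastic" here means that each iteration samples both a mini-batch of examples and a random block of coordinates. A bare-bones Python illustration for a least-squares risk (this sketch omits the paper's acceleration and multi-momentum technique, and all names are hypothetical):

import numpy as np

def doubly_stochastic_step(w, X, y, lr=0.1, batch=32, block=16, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    rows = rng.choice(n, size=batch, replace=False)    # random mini-batch of samples
    cols = rng.choice(d, size=block, replace=False)    # random block of coordinates
    resid = X[rows] @ w - y[rows]                      # residual on the mini-batch
    grad_block = X[rows][:, cols].T @ resid / batch    # gradient w.r.t. the chosen block
    w[cols] -= lr * grad_block                         # update only that block
    return w

Note that the residual above still touches the full vector w; the published algorithms maintain auxiliary variables so that the true per-iteration cost stays proportional to the mini-batch and block sizes.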
2

Lim, Shi Ying Candice, Bradley Adam Camburn, Diana Moreno, Zack Huang, and Kristin Wood. "Design Concept Structures in Massive Group Ideation." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59805.

Abstract:
Empirical work in design science has highlighted that the process of ideation can significantly affect design outcome. Exploring the design space with both breadth and depth increases the likelihood of achieving better design outcomes. Furthermore, iteratively attempting to solve challenging design problems in large groups over a short time period may be more effective than protracted exploration by an isolated set of individuals. There remains a substantial opportunity to explore the structure of various design concept sets. In addition, many empirical studies cap analysis at sample sizes of less than one hundred individuals. This has provided substantial, though partial, models of the ideation space. This work explores one new territory in large scale ideation. Two conditions are evaluated. In the first condition, an ideation session was run with 2400 practicing designers and engineers from one organization. In the second condition 1000 individuals ideate on the same problem in a completely distributed environment and without awareness of each other. We compare properties of solution sets produced by each of these groups and activities. Analytical tools from network modeling theory are applied as well as traditional ideation metrics such as concept binning with saturation analysis. Structural network modeling is applied to evaluate the interconnectivity of design concepts. This is a strictly quantitative, and at the same time graphically expressive, means to evaluate the diversity of a design solution set. Observations indicate that the group condition approached saturation of distinct categories more rapidly than the individual, distributed condition. The total number of solution categories developed in the group condition was also higher. Additionally, individuals generally provided concepts across a greater number of solution categories in the group condition. The indication for design practice is that groups of just under forty individuals would provide category saturation within group ideation for a system level design, while distributed individuals may provide additional concept differentiation. This evidence can support development of more systematic ideation strategies. Furthermore, we provide an algorithmic approach for quantitative evaluation of variety in design solution sets using networking analysis techniques. These methods can be used in complex or wicked problems, and system development where the design space is vast.
3

Demanze, Fre´de´ric, Didier Hanonge, Alain Chalumeau, and Olivier Leclerc. "Fatigue Life Analysis of Polyurethane Bending Stiffeners." In ASME 2005 24th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2005. http://dx.doi.org/10.1115/omae2005-67506.

Abstract:
Following several bending stiffener fatigue failures during full-scale tests performed at Flexi France on flexible pipe and stiffener assemblies, Technip decided to launch in 1999 a major research program on fatigue life analysis of bending stiffeners made of Polyurethane material. This fatigue life assessment is now systematically performed by Technip for all new designs of flexible riser bending stiffeners. This totally innovative method comprises a number of features as follows: Firstly, the fatigue behaviour of the polyurethane material is described. The theoretical background, based on the effective strain intensity factor, is detailed, together with experimental results on laboratory notched samples, loaded under strain control at various strain ratios, to obtain fatigue data. These fatigue data are well fitted by a power law defining the total number of cycles at break as a function of the effective strain intensity factor. The notion of a fatigue threshold, below which no propagation is observed, is also demonstrated. Secondly, the design used by Technip for its bending stiffeners is presented, along with the critical areas regarding fatigue for these massive polyurethane structures. Thirdly, the methodology for fatigue life assessment of bending stiffeners in the critical areas defined above is discussed. Calibration of the strain calculation principle is presented versus finite element analysis. Based on all fatigue test results, the size of the equivalent notch to be considered at the design stage, in the same critical areas, is discussed. Finally, a comprehensive calibration of the methodology against full- and middle-scale test results is presented. The present paper is therefore a step forward in the knowledge of the fatigue behaviour of massive polyurethane bending stiffener structures, which are critical items for flexible riser integrity, and widely used in the offshore industry. Confidence in bending stiffener reliability is greatly enhanced by the introduction of this innovative methodology developed by Technip.
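
The power-law fit mentioned in this abstract can be written generically as N = C · (ΔK_eff)^(-m) for ΔK_eff > ΔK_th, where N is the number of cycles to failure, ΔK_eff the effective strain intensity factor, ΔK_th the fatigue threshold below which no propagation is observed, and C and m are fitting constants (the symbols here are generic placeholders, not values from the paper).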
4

Nakajima, Yasuharu, Joji Yamamoto, Shigeo Kanada, et al. "Study on Grinding Technology for Seafloor Mineral Processing." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10756.

Abstract:
Seafloor Massive Sulfides (SMSs), which are formed by precipitates from hydrothermal fluids vented from seafloor, have been expected as one of mineral resources to be developed. The authors have proposed the concept of seafloor mineral processing for SMS mining, where valuable minerals contained in SMS ores are separated on seafloor. To apply a ball mill to the grinding unit for seafloor mineral processing, grinding experiments were carried out using a small-scale ball mill applicable to high-pressure condition. In the experiments, wet grinding and water-filled grinding of size-classified silica sands were carried out at three rotation rates to compare the grinding performance in both cases. In both cases, the silica sands were finely ground. The measurement of particle size of samples from the experiments showed that water-filled grinding had comparable grinding performance to wet grinding while the suitable rotation rate for water-filled grinding shifted to higher than that for wet grinding. This result suggests the possibility of water-filled grinding for seafloor mineral processing. If water-filled grinding can be employed for the grinding unit, the structure of the grinding unit would be simplified in comparison with wet grinding that leads to the saving of grinding costs.
5

Krishne Gowda, Y. T., Ravindra Holalu Venkatdas, and Vikram Chowdeswarally Krishnappa. "Flow Past Square Cylinders of Different Size With and Without Corner Modification." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50274.

Abstract:
In many mechanical engineering applications, separated flows appear around objects such as tall buildings, monuments, and towers that are permanently exposed to wind. Similarly, piers, bridge pillars, and legs of offshore platforms are continuously subjected to the loads produced by maritime or fluvial streams. These bodies usually create a large region of separated flow and a massive unsteady wake region downstream. The highly asymmetric and periodic nature of the downstream flow has attracted the attention of physicists, engineers, and CFD practitioners. A lot of research work has been carried out for a single square cylinder, but flow past square cylinders with and without corner modification has not been taken up. This motivated the present numerical study of flow past two square cylinders of different size. Reynolds numbers of 100 and 200 are considered for the investigation. The flow is assumed to be two-dimensional, unsteady, and incompressible. In the computational methodology, once the problem is defined, the first step is to construct a geometry on which the simulation is planned. Once the geometry is constructed, its boundaries are assigned in accordance with the actual physical state and the corresponding boundary options are set. After setting the boundary types, the continuum type is set. The geometry is discretized into small control volumes. Once the surface mesh is completed, the mesh details are exported to a mesh file and then to Fluent, a CFD solver usually run in background mode, which helps to prioritize the execution of the run. The run continues until the required convergence criterion is reached or the maximum number of iterations is completed. Results indicate that, in the case of chamfered and rounded corners of the square cylinder, there is a decrease in the wake width and thereby in the lift and drag coefficient values. The form drag is reduced because of a higher average pressure downstream when separation is delayed by corner modification. The lift coefficients of a square cylinder with corner modification decrease, but the Strouhal number increases, when compared with a square cylinder without corner modification. The Strouhal number remains the same even if the magnitude of the oscillations is increased while monitoring the velocity behind the cylinder. The frequency of vortex shedding decreases with the introduction of a second cylinder either upstream or downstream of the first cylinder. As the centre distance between the two cylinders, i.e., the pitch-to-perimeter ratio, is increased to 6, the behavior of the flow approaches that of flow past a single square cylinder with or without modification under the same conditions. When the perimeter of the upstream cylinder, with or without modification, is larger than that of the downstream cylinder, the eddies are always bigger between the cylinders than downstream of the second cylinder. The flow velocity between the cylinders, with and without corner modification, is lower than that downstream of the second cylinder. As the distance increases, the flow velocity between the cylinders becomes almost equal to that downstream of the second cylinder. The results are presented in the form of streamlines, flow velocity, pressure distribution, drag coefficient, lift coefficient, and Strouhal number.
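
For readers outside the field, the two dimensionless groups used throughout this abstract have their standard definitions: the Reynolds number Re = U·D/ν and the Strouhal number St = f·D/U, where D is the cylinder side length, U the free-stream velocity, f the vortex-shedding frequency, and ν the kinematic viscosity.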
6

Darbandi, Masoud, Ali Fatin, and Gerry E. Schneider. "Careful Parameter Study to Enhance the Effect of Injecting Heavy Fuel Oil Into a Crossflow Using Numerical Approaches." In ASME 2018 5th Joint US-European Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/fedsm2018-83207.

Full text
Abstract:
The flow and spray parameters can play noticeable roles in heavy fuel oil (HFO) spray finesse. As is well known, the interaction between droplets and a cross flow must be considered carefully in many industrial applications such as process burners and gas turbine combustors, so it is important to investigate the effect of injecting HFO into a crossflow more closely. In this work, the effects of various flow and spray parameters on droplet breakup and dispersion are investigated numerically using the finite-volume-element method. The numerical method consists of a number of models to predict droplet breakup and dispersion into a cross flow, including a spray-turbulence interaction model. An Eulerian–Lagrangian approach, which suitably models the interaction between the droplets and turbulence as well as the droplets' secondary breakup, is used to investigate the interactions between the flow and the droplet behavior. After validating the computational method by comparing its results with data from past research, four test cases with varying swirl number, air axial velocity, droplet size, and fuel injection velocity are examined to determine the effects of these parameters on spray characteristics including the droplet paths, the Sauter mean diameter (SMD), and the dispersed-phase mass concentration. The results show that droplet inertia and flow velocity magnitude have significant effects on the spray characteristics. As the droplets become more massive, the deflection of the spray in the flow direction becomes smaller. Also, increasing the flow velocity causes more deflection for sprays with the same droplet sizes.
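The Sauter mean diameter (SMD, or D32) cited above is the diameter whose volume-to-surface-area ratio matches that of the whole droplet population, SMD = sum(d_i^3) / sum(d_i^2). A minimal sketch, assuming a hypothetical list of sampled droplet diameters rather than data from the study:

    import numpy as np

    def sauter_mean_diameter(diameters_um):
        """SMD (D32) = sum(d^3) / sum(d^2) over all sampled droplets."""
        d = np.asarray(diameters_um, dtype=float)
        return np.sum(d**3) / np.sum(d**2)

    # Hypothetical droplet sample, diameters in micrometres
    sample = [12.0, 18.5, 25.0, 40.0, 55.0]
    print(f"SMD = {sauter_mean_diameter(sample):.1f} um")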
APA, Harvard, Vancouver, ISO, and other styles
7

Masanobu, Sotaro, Satoru Takano, Shigeo Kanada, Masao Ono, and Hiroki Sasagawa. "Experimental Investigation of Large Particle Slurry Transport in Vertical Pipes With Pulsating Flow." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-18194.

Full text
Abstract:
For subsea mining, it is important to predict the pressure loss in oscillating pipes with pulsating flow for the safe and reliable operation of ore lifting. In the present paper, the authors focus on pulsating internal flow in a static vertical pipe and carry out slurry transport experiments to investigate the effects of flow fluctuation on the pressure loss. Alumina beads and glass beads were used as the solid particles, and the period and amplitude of the pulsating water flow were varied. The time-averaged pressure losses calculated by the prediction method for steady flow proposed previously by the authors agreed well with the experimental values. As for the fluctuating component of the pressure loss, calculations using a quasi-steady expression of a mixture model were compared with the experimental data. The calculated results differed from the experimental ones for alumina beads, whose densities are almost the same as those of Seafloor Massive Sulfide ores, suggesting that the expression is insufficient for predicting the pressure loss for heavy solid particles; the calculated values were, however, on the safe side. On the other hand, the calculated results for light solid particles such as glass beads agreed well with the experimental ones, which means that the expression should be applicable to predicting the pressure loss for the mining of manganese nodules, which are lighter than Seafloor Massive Sulfide ores.
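For orientation, a quasi-steady mixture-model estimate of the kind referred to above treats the slurry as a single fluid of mixture density, so the vertical pressure loss combines a static-head term and a wall-friction term. The sketch below illustrates that generic form only; it is not the authors' proposed prediction method, and the densities, delivered concentration, and friction factor are assumed example values:

    def slurry_pressure_loss(rho_solid, rho_liquid, c_v, velocity, diameter, length,
                             friction_factor=0.02, g=9.81):
        """Quasi-steady pressure loss [Pa] over a vertical pipe section.

        Mixture density rho_m = c_v*rho_solid + (1 - c_v)*rho_liquid, with a
        Darcy friction term f * (L/D) * rho_m * v^2 / 2 added to the static head.
        """
        rho_m = c_v * rho_solid + (1.0 - c_v) * rho_liquid
        static = rho_m * g * length
        friction = friction_factor * (length / diameter) * 0.5 * rho_m * velocity**2
        return static + friction

    # Hypothetical case: alumina-like particles (3900 kg/m^3) in water, 10 % by volume,
    # 3 m/s mean velocity in a 0.05 m pipe over a 10 m lift
    print(slurry_pressure_loss(3900.0, 1000.0, 0.10, 3.0, 0.05, 10.0))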
APA, Harvard, Vancouver, ISO, and other styles
8

Dernaika, Moustafa, Osama Al Jallad, Safouh Koronfol, et al. "Petrophysical and Fluid Flow Properties of a Tight Carbonate Source Rock Using Digital Rock Physics." In SPE Middle East Unconventional Resources Conference and Exhibition. SPE, 2015. http://dx.doi.org/10.2118/spe-172959-ms.

Full text
Abstract:
The evaluation of shale is complicated by the structurally heterogeneous nature of fine-grained strata and their intricate pore networks, which depend on many interrelated geologic factors including total organic carbon (TOC) content, mineralogy, maturity, and grain size. The ultra-low permeability of shale rock requires massive hydraulic fracturing to enhance connectivity and increase permeability for flow. To design an effective fracturing technique, it is necessary to have a good understanding of the reservoir characteristics and fluid flow properties at multiple scales. In this work, representative core plug samples from a tight carbonate source rock in the Middle East were characterized at the core and pore scales using a Digital Rock Physics (DRP) workflow. The tight nature of the carbonate rocks prevented the use of conventional methods for measuring special core analysis (SCAL) data. Two-dimensional scanning electron microscopy (SEM) and three-dimensional focused ion beam (FIB)-SEM analysis were used to characterize the organic matter content of the samples together with the (organic and inorganic) porosity and matrix permeability. The 3D FIB-SEM images were also used to determine petrophysical and fluid flow (SCAL) properties in primary drainage and imbibition modes. A clear trend was observed between porosity and permeability, related to identified rock fabrics and organic matter in the core. The organic matter was found to affect the imbibition two-phase relative permeability, the capillary pressure behavior, and the hysteresis trends among the analyzed samples. The data obtained from DRP provided information that can enhance the understanding of pore systems and fluid flow properties in tight formations, which cannot be derived accurately using conventional methods.
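In Digital Rock Physics workflows of this kind, porosity is typically obtained by counting labelled voxels in the segmented SEM or FIB-SEM volume. A minimal sketch under that assumption (the label convention and the random array are placeholders standing in for real segmented data):

    import numpy as np

    # Hypothetical segmented FIB-SEM volume: 0 = mineral matrix, 1 = inorganic pore,
    # 2 = organic matter, 3 = pore hosted inside organic matter
    labels = np.random.randint(0, 4, size=(64, 64, 64))

    total_voxels = labels.size
    inorganic_porosity = np.count_nonzero(labels == 1) / total_voxels
    organic_porosity = np.count_nonzero(labels == 3) / total_voxels
    organic_matter_fraction = np.count_nonzero(labels == 2) / total_voxels

    print(f"inorganic porosity     : {inorganic_porosity:.3f}")
    print(f"organic porosity       : {organic_porosity:.3f}")
    print(f"organic-matter fraction: {organic_matter_fraction:.3f}")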
APA, Harvard, Vancouver, ISO, and other styles
9

Zambonini, Gherardo, Xavier Ottavy, and Jochen Kriegseis. "Corner Separation Dynamics in a Linear Compressor Cascade." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-56454.

Full text
Abstract:
This paper considers the inherently unsteady behavior of the three-dimensional separation in the corner region of a subsonic linear compressor cascade equipped with thirteen NACA 65-009 profile blades. Detailed experimental measurements were carried out at different sections in the spanwise direction, acquiring simultaneously unsteady wall-pressure signals on the surface of the blade and velocity fields from time-resolved PIV measurements. Two configurations of the cascade were investigated, with incidences of 4° and 7°, both at Re = 3.8 × 10^5 and Ma = 0.12 at the inlet of the facility. The intermittent switch between the two statistically preferred sizes of separation, large and almost suppressed, is called bimodal behavior. The existence of this oscillation, first reported in previous experimental and numerical work on the same test rig, is confirmed for both incidences. Additionally, the present PIV measurements provide, for the first time, time-resolved flow visualizations of the size switch of the separation with an extended field of view covering the entire blade section. The interaction of random large structures in the incoming boundary layer with the blade is found to be a predominant element that destabilizes the separation boundary. The recirculation region enlarges when these high-vorticity perturbations blend with larger eddies situated in the aft part of the blade. This massive separation persists until the blockage in the passage causes the breakdown of the largest structures in the aft part of the blade; the flow then starts to accelerate again and the separation is almost suppressed. Finally, a POD analysis is carried out to decompose the flow into modes and to help clarify the underlying cause-effect relations that dominate the dynamics of the present flow scenario.
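The POD analysis mentioned in the closing sentence is commonly computed as a singular value decomposition of the mean-subtracted snapshot matrix: the left singular vectors are spatial modes and the squared singular values rank the modes by captured fluctuation energy. A minimal sketch of that standard procedure (the snapshot array is hypothetical, not the authors' PIV data):

    import numpy as np

    def pod_modes(snapshots):
        """POD via SVD. snapshots: (n_points, n_snapshots) velocity data."""
        mean_field = snapshots.mean(axis=1, keepdims=True)
        fluctuations = snapshots - mean_field              # subtract the mean flow
        modes, sing_vals, time_coeffs = np.linalg.svd(fluctuations, full_matrices=False)
        energy_fraction = sing_vals**2 / np.sum(sing_vals**2)
        return modes, energy_fraction, time_coeffs

    # Hypothetical PIV data set: 5000 vector positions, 400 snapshots
    data = np.random.rand(5000, 400)
    modes, energy, coeffs = pod_modes(data)
    print("energy captured by the first 3 modes:", energy[:3].sum())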
APA, Harvard, Vancouver, ISO, and other styles
10

Luan, Yigang, Yang Ni, Huannan Liu, Shi Bu, and Haiou Sun. "Investigation of Performance of High Velocity Air Intake Wave-Plate Separators for Marine Gas Turbines." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-56220.

Full text
Abstract:
Owing to the advantages of high power, small size, and light weight, the use of gas turbines in marine power plants has increased considerably over time. Considering the conditions of offshore operation, which include a very large air intake flow, a heavily mist-laden marine atmosphere, and the limited space available on a vessel, this paper combines an experimental study with numerical simulation to investigate the performance of a new type of air intake wave-plate separator that can operate under high intake-velocity conditions. The total pressure drop was measured in a small wind tunnel with separator inlet velocities ranging from 1.0 to 15.0 m/s, and the resistance characteristics of the high-velocity wave-plate separators were simulated over the same velocity range. The separation efficiencies of the high-velocity wave-plate separators were simulated at an inlet velocity of 14.0 m/s for droplet diameters of 5 μm, 10 μm, 15 μm, and 20 μm. From the results of the experiments and simulations, the paper concludes that high-velocity wave-plate separators can maintain high separation efficiencies and an acceptable total pressure drop at high inlet velocities.
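Separator results of this kind are often reduced to a dimensionless pressure-loss coefficient, zeta = dp / (0.5*rho*v^2), and a separation (grade) efficiency, eta = 1 - m_out/m_in, for each droplet size. A minimal sketch of that reduction, with entirely hypothetical numbers standing in for measured values:

    def pressure_loss_coefficient(delta_p, velocity, rho_air=1.2):
        """Dimensionless resistance: zeta = dp / (0.5 * rho * v^2)."""
        return delta_p / (0.5 * rho_air * velocity**2)

    def separation_efficiency(mist_in, mist_out):
        """Fraction of incoming liquid mass removed by the separator."""
        return 1.0 - mist_out / mist_in

    # Hypothetical operating point: 120 Pa pressure drop at 14 m/s inlet velocity,
    # 0.95 g/s of mist entering, 0.06 g/s escaping downstream
    print(pressure_loss_coefficient(120.0, 14.0))
    print(separation_efficiency(0.95, 0.06))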
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Massive sample size"

1

Vargas-Herrera, Hernando, Juan Jose Ospina-Tejeiro, Carlos Alfonso Huertas-Campos, et al. Monetary Policy Report - April 2021. Banco de la República de Colombia, 2021. http://dx.doi.org/10.32468/inf-pol-mont-eng.tr2-2021.

Full text
Abstract:
1.1 Macroeconomic summary Economic recovery has consistently outperformed the technical staff’s expectations following a steep decline in activity in the second quarter of 2020. At the same time, total and core inflation rates have fallen and remain at low levels, suggesting that a significant element of the reactivation of Colombia’s economy has been related to recovery in potential GDP. This would support the technical staff’s diagnosis of weak aggregate demand and ample excess capacity. The most recently available data on 2020 growth suggests a contraction in economic activity of 6.8%, lower than estimates from January’s Monetary Policy Report (-7.2%). High-frequency indicators suggest that economic performance was significantly more dynamic than expected in January, despite mobility restrictions and quarantine measures. This has also come amid declines in total and core inflation, the latter of which was below January projections if controlling for certain relative price changes. This suggests that the unexpected strength of recent growth contains elements of demand, and that excess capacity, while significant, could be lower than previously estimated. Nevertheless, uncertainty over the measurement of excess capacity continues to be unusually high and marked both by variations in the way different economic sectors and spending components have been affected by the pandemic, and by uneven price behavior. The size of excess capacity, and in particular the evolution of the pandemic in forthcoming quarters, constitute substantial risks to the macroeconomic forecast presented in this report. Despite the unexpected strength of the recovery, the technical staff continues to project ample excess capacity that is expected to remain on the forecast horizon, alongside core inflation that will likely remain below the target. Domestic demand remains below 2019 levels amid unusually significant uncertainty over the size of excess capacity in the economy. High national unemployment (14.6% for February 2021) reflects a loose labor market, while observed total and core inflation continue to be below 2%. Inflationary pressures from the exchange rate are expected to continue to be low, with relatively little pass-through on inflation. This would be compatible with a negative output gap. Excess productive capacity and the expectation of core inflation below the 3% target on the forecast horizon provide a basis for an expansive monetary policy posture. The technical staff’s assessment of certain shocks and their expected effects on the economy, as well as the presence of several sources of uncertainty and related assumptions about their potential macroeconomic impacts, remain a feature of this report. The coronavirus pandemic, in particular, continues to affect the public health environment, and the reopening of Colombia’s economy remains incomplete. The technical staff’s assessment is that the COVID-19 shock has affected both aggregate demand and supply, but that the impact on demand has been deeper and more persistent. Given this persistence, the central forecast accounts for a gradual tightening of the output gap in the absence of new waves of contagion, and as vaccination campaigns progress. The central forecast continues to include an expected increase of total and core inflation rates in the second quarter of 2021, alongside the lapse of the temporary price relief measures put in place in 2020. 
Additional COVID-19 outbreaks (of uncertain duration and intensity) represent a significant risk factor that could affect these projections. Additionally, the forecast continues to include an upward trend in sovereign risk premiums, reflected by higher levels of public debt that in the wake of the pandemic are likely to persist on the forecast horizon, even in the context of a fiscal adjustment. At the same time, the projection accounts for the short-term effects on private domestic demand from a fiscal adjustment along the lines of the one currently being proposed by the national government. This would be compatible with a gradual recovery of private domestic demand in 2022. The size and characteristics of the fiscal adjustment that is ultimately implemented, as well as the corresponding market response, represent another source of forecast uncertainty. Newly available information offers evidence of the potential for significant changes to the macroeconomic scenario, though without altering the general diagnosis described above. The most recent data on inflation, growth, fiscal policy, and international financial conditions suggests a more dynamic economy than previously expected. However, a third wave of the pandemic has delayed the re-opening of Colombia’s economy and brought with it a deceleration in economic activity. Detailed descriptions of these considerations and subsequent changes to the macroeconomic forecast are presented below. The expected annual decline in GDP (-0.3%) in the first quarter of 2021 appears to have been less pronounced than projected in January (-4.8%). Partial closures in January to address a second wave of COVID-19 appear to have had a less significant negative impact on the economy than previously estimated. This is reflected in figures related to mobility, energy demand, industry and retail sales, foreign trade, commercial transactions from selected banks, and the national statistics agency’s (DANE) economic tracking indicator (ISE). Output is now expected to have declined annually in the first quarter by 0.3%. Private consumption likely continued to recover, registering levels somewhat above those from the previous year, while public consumption likely increased significantly. While a recovery in investment in both housing and in other buildings and structures is expected, overall investment levels in this case likely continued to be low, and gross fixed capital formation is expected to continue to show significant annual declines. Imports likely recovered to again outpace exports, though both are expected to register significant annual declines. Economic activity that outpaced projections, an increase in oil prices and other export products, and an expected increase in public spending this year account for the upward revision to the 2021 growth forecast (from 4.6% with a range between 2% and 6% in January, to 6.0% with a range between 3% and 7% in April). As a result, the output gap is expected to be smaller and to tighten more rapidly than projected in the previous report, though it is still expected to remain in negative territory on the forecast horizon. Wide forecast intervals reflect the fact that the future evolution of the COVID-19 pandemic remains a significant source of uncertainty on these projections. The delay in the recovery of economic activity as a result of the resurgence of COVID-19 in the first quarter appears to have been less significant than projected in the January report.
The central forecast scenario expects this improved performance to continue in 2021 alongside increased consumer and business confidence. Low real interest rates and an active credit supply would also support this dynamic, and the overall conditions would be expected to spur a recovery in consumption and investment. Increased growth in public spending and public works based on the national government’s spending plan (Plan Financiero del Gobierno) are other factors to consider. Additionally, an expected recovery in global demand and higher projected prices for oil and coffee would further contribute to improved external revenues and would favor investment, in particular in the oil sector. Given the above, the technical staff’s 2021 growth forecast has been revised upward from 4.6% in January (range from 2% to 6%) to 6.0% in April (range from 3% to 7%). These projections account for the potential for the third wave of COVID-19 to have a larger and more persistent effect on the economy than the previous wave, while also supposing that there will not be any additional significant waves of the pandemic and that mobility restrictions will be relaxed as a result. Economic growth in 2022 is expected to be 3%, with a range between 1% and 5%. This figure would be lower than projected in the January report (3.6% with a range between 2% and 6%), due to a higher base of comparison given the upward revision to expected GDP in 2021. This forecast also takes into account the likely effects on private demand of a fiscal adjustment of the size currently being proposed by the national government, and which would come into effect in 2022. Excess in productive capacity is now expected to be lower than estimated in January but continues to be significant and affected by high levels of uncertainty, as reflected in the wide forecast intervals. The possibility of new waves of the virus (of uncertain intensity and duration) represents a significant downward risk to projected GDP growth, and is signaled by the lower limits of the ranges provided in this report. Inflation (1.51%) and inflation excluding food and regulated items (0.94%) declined in March compared to December, continuing below the 3% target. The decline in inflation in this period was below projections, explained in large part by unanticipated increases in the costs of certain foods (3.92%) and regulated items (1.52%). An increase in international food and shipping prices, increased foreign demand for beef, and specific upward pressures on perishable food supplies appear to explain a lower-than-expected deceleration in the consumer price index (CPI) for foods. An unexpected increase in regulated items prices came amid unanticipated increases in international fuel prices, on some utilities rates, and for regulated education prices. The decline in annual inflation excluding food and regulated items between December and March was in line with projections from January, though this included downward pressure from a significant reduction in telecommunications rates due to the imminent entry of a new operator. When controlling for the effects of this relative price change, inflation excluding food and regulated items exceeds levels forecast in the previous report. 
Within this indicator of core inflation, the CPI for goods (1.05%) accelerated due to a reversion of the effects of the VAT-free day in November, which was largely accounted for in February, and possibly by the transmission of a recent depreciation of the peso on domestic prices for certain items (electric and household appliances). For their part, services prices decelerated and showed the lowest rate of annual growth (0.89%) among the large consumer baskets in the CPI. Within the services basket, the annual change in rental prices continued to decline, while those services that continue to experience the most significant restrictions on returning to normal operations (tourism, cinemas, nightlife, etc.) continued to register significant price declines. As previously mentioned, telephone rates also fell significantly due to increased competition in the market. Total inflation is expected to continue to be affected by ample excesses in productive capacity for the remainder of 2021 and 2022, though less so than projected in January. As a result, convergence to the inflation target is now expected to be somewhat faster than estimated in the previous report, assuming the absence of significant additional outbreaks of COVID-19. The technical staff’s year-end inflation projections for 2021 and 2022 have increased, suggesting figures around 3% due largely to variation in food and regulated items prices. The projection for inflation excluding food and regulated items also increased, but remains below 3%. Price relief measures on indirect taxes implemented in 2020 are expected to lapse in the second quarter of 2021, generating a one-off effect on prices and temporarily affecting inflation excluding food and regulated items. However, indexation to low levels of past inflation, weak demand, and ample excess productive capacity are expected to keep core inflation below the target, near 2.3% at the end of 2021 (previously 2.1%). The reversion in 2021 of the effects of some price relief measures on utility rates from 2020 should lead to an increase in the CPI for regulated items in the second half of this year. Annual price changes are now expected to be higher than estimated in the January report due to an increased expected path for fuel prices and unanticipated increases in regulated education prices. The projection for the CPI for foods has increased compared to the previous report, taking into account certain factors that were not anticipated in January (a less favorable agricultural cycle, increased pressure from international prices, and transport costs). Given the above, year-end annual inflation for 2021 and 2022 is now expected to be 3% and 2.8%, respectively, which would be above projections from January (2.3% and 2.7%). For its part, expected inflation based on analyst surveys suggests year-end inflation in 2021 and 2022 of 2.8% and 3.1%, respectively. There remains significant uncertainty surrounding the inflation forecasts included in this report due to several factors: 1) the evolution of the pandemic; 2) the difficulty in evaluating the size and persistence of excess productive capacity; 3) the timing and manner in which price relief measures will lapse; and 4) the future behavior of food prices. Projected 2021 growth in foreign demand (4.4% to 5.2%) and the assumed average oil price (USD 53 to USD 61 per Brent benchmark barrel) were both revised upward.
An increase in long-term international interest rates has been reflected in a depreciation of the peso and could result in relatively tighter external financial conditions for emerging market economies, including Colombia. Average growth among Colombia’s trade partners was greater than expected in the fourth quarter of 2020. This, together with a sizable fiscal stimulus approved in the United States and the onset of a massive global vaccination campaign, largely explains the projected increase in foreign demand growth in 2021. The resilience of the goods market in the face of global crisis and an expected normalization in international trade are additional factors. These considerations and the expected continuation of a gradual reduction of mobility restrictions abroad suggest that Colombia’s trade partners could grow on average by 5.2% in 2021 and around 3.4% in 2022. The improved prospects for global economic growth have led to an increase in current and expected oil prices. Production interruptions due to a heavy winter, reduced inventories, and increased supply restrictions instituted by producing countries have also contributed to the increase. Meanwhile, market forecasts and recent Federal Reserve pronouncements suggest that the benchmark interest rate in the U.S. will remain stable for the next two years. Nevertheless, a significant increase in public spending in the country has fostered expectations for greater growth and inflation, as well as increased uncertainty over the moment in which a normalization of monetary policy might begin. This has been reflected in an increase in long-term interest rates. In this context, emerging market economies in the region, including Colombia, have registered increases in sovereign risk premiums and long-term domestic interest rates, and a depreciation of local currencies against the dollar. Recent outbreaks of COVID-19 in several of these economies; limits on vaccine supply and the slow pace of immunization campaigns in some countries; a significant increase in public debt; and tensions between the United States and China, among other factors, all add to a high level of uncertainty surrounding interest rate spreads, external financing conditions, and the future performance of risk premiums. The impact that this environment could have on the exchange rate and on domestic financing conditions represent risks to the macroeconomic and monetary policy forecasts. Domestic financial conditions continue to favor recovery in economic activity. The transmission of reductions to the policy interest rate on credit rates has been significant. The banking portfolio continues to recover amid circumstances that have affected both the supply and demand for loans, and in which some credit risks have materialized. Preferential and ordinary commercial interest rates have fallen to a similar degree as the benchmark interest rate. As is generally the case, this transmission has come at a slower pace for consumer credit rates, and has been further delayed in the case of mortgage rates. Commercial credit levels stabilized above pre-pandemic levels in March, following an increase resulting from significant liquidity requirements for businesses in the second quarter of 2020. The consumer credit portfolio continued to recover and has now surpassed February 2020 levels, though overall growth in the portfolio remains low. At the same time, portfolio projections and default indicators have increased, and credit establishment earnings have come down. 
Despite this, credit disbursements continue to recover and solvency indicators remain well above regulatory minimums. 1.2 Monetary policy decision: In its meetings in March and April the BDBR left the benchmark interest rate unchanged at 1.75%.
APA, Harvard, Vancouver, ISO, and other styles
