Academic literature on the topic 'Multiple Single Input Change Vector (MSIC)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiple Single Input Change Vector (MSIC).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Multiple Single Input Change Vector (MSIC)"

1

Zhang, Guohe, Ye Yuan, Feng Liang, Sufen Wei, and Cheng-Fu Yang. "Low Cost Test Pattern Generation in Scan-Based BIST Schemes." Electronics 8, no. 3 (2019): 314. http://dx.doi.org/10.3390/electronics8030314.

Full text
Abstract:
This paper proposes a low-cost test pattern generator for scan-based built-in self-test (BIST) schemes. Our method generates broadcast-based multiple single input change (BMSIC) vectors to fill more scan chains. The proposed algorithm, BMSIC-TPG, is based on our previous work, the multiple single input change (MSIC) TPG. A broadcast circuit expands the MSIC vectors, so the hardware overhead of the test pattern generation circuit is reduced. Simulation results with the ISCAS'89 benchmarks and a comparison with the MSIC-TPG circuit show that the proposed BMSIC-TPG reduces circuit hardware overhead by about 50% while ensuring low power consumption and high fault coverage.
APA, Harvard, Vancouver, ISO, and other styles
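The single-input-change property at the heart of MSIC generation can be illustrated with a small behavioral sketch (not the authors' circuit): a Johnson (twisted-ring) counter produces codewords in which consecutive states differ in exactly one bit, and XOR-ing each codeword with a seed vector (e.g. from an LFSR) yields a low-transition sequence for one scan chain. All names and the 4-bit width below are illustrative assumptions.

```python
def johnson_states(width):
    """Yield the 2*width states of a Johnson (twisted-ring) counter.

    Consecutive states differ in exactly one bit, which is the
    single-input-change property MSIC generation relies on.
    """
    state = [0] * width
    for _ in range(2 * width):
        yield tuple(state)
        # shift right, feeding back the inverted last bit
        state = [1 - state[-1]] + state[:-1]

def msic_vectors(seed, width):
    """XOR a seed (e.g. an LFSR output) with each Johnson codeword,
    giving a low-transition test sequence for one scan chain."""
    return [tuple(s ^ j for s, j in zip(seed, code))
            for code in johnson_states(width)]
```

Because XOR with a fixed seed preserves Hamming distance, the resulting vectors inherit the one-bit-change property, which is what keeps scan-shift power low.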
2

Sathesh Kumar, G., and V. Saminadan. "A NOVAL BISR APPROACH FOR EMBEDDED MEMORY SELF REPAIR." GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES 5, no. 9 (2018): 267–74. https://doi.org/10.5281/zenodo.1441095.

Full text
Abstract:
As the density of embedded memory increases, manufacturing yields of integrated circuits can reach unacceptable limits. Normal memory testing operations require BIST to deal effectively with problems such as limited access and at-speed testing. Built-in self-repair (BISR) techniques are widely used for the repair of embedded memories. One of the key components of a BISR circuit is the built-in redundancy-analysis (BIRA) module, which allocates redundancies according to the designed redundancy analysis algorithm. This project proposes a BIRA scheme for RAMs that can provide the optimal repair rate with very low area cost and a single test run of multiple single input change (MSIC) vectors in a pattern. Furthermore, manifested errors are detected at the modules' outputs using novel voting, while latent faults are detected by comparing the internal states of the memory modules. Upon detection of any mismatch, the faulty modules are located and the state of a fault-free module is copied into the faulty modules.
3

Nandini Priya, M., R. Vivitadurga, and U. Priya. "Design of Low Power TPG for BIST Using Reconfigurable Johnson Counter." Journal of VLSI Design and Signal Processing 5, no. 1 (2019): 6–15. https://doi.org/10.5281/zenodo.2532997.

Full text
Abstract:
Built-in Self-Test plays an essential role in the testing of VLSI circuits: test patterns produced by a pattern generator are used to exercise the circuit under test. Conventional test pattern generation uses a Reconfigurable Johnson Counter and an LFSR, which lack correlation between successive test vectors. A modern low-power test pattern generator is developed using a Reconfigurable Johnson Counter and an Accumulator; low power consumption is essential for battery-operated devices. The scheme for producing the test vectors for BIST is coded in VHDL, and simulations were performed with ModelSim 10.0b.
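The paper's generator is written in VHDL and is not reproduced here; as a rough behavioral sketch under assumed semantics, a "reconfigurable" shift register can step either as a Johnson counter (low-transition patterns) or as an LFSR (pseudo-random patterns) depending on a mode input. The 4-bit width and tap choice are illustrative, not the authors' design.

```python
def step(state, mode):
    """Advance a 4-bit reconfigurable shift register by one clock.

    mode == "johnson": twisted-ring feedback (inverted last bit re-enters),
                       giving low-transition test patterns.
    mode == "lfsr":    XOR feedback from taps x^4 + x^3 + 1 (illustrative),
                       giving pseudo-random test patterns.
    """
    if mode == "johnson":
        feedback = 1 - state[-1]
    else:
        feedback = state[3] ^ state[2]
    return [feedback] + state[:-1]
```

Switching the mode bit at run time is what makes such a counter "reconfigurable": the same register serves both the low-power and the high-coverage phases of a BIST session.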
4

Starke, Josefine, Bernhard Wehrle-Haller, and Peter Friedl. "Plasticity of the actin cytoskeleton in response to extracellular matrix nanostructure and dimensionality." Biochemical Society Transactions 42, no. 5 (2014): 1356–66. http://dx.doi.org/10.1042/bst20140139.

Full text
Abstract:
Mobile cells discriminate and adapt to mechanosensory input from extracellular matrix (ECM) topographies to undergo actin-based polarization, shape change and migration. We tested ‘cell-intrinsic’ and adaptive components of actin-based cell migration in response to widely used in vitro collagen-based substrates, including a continuous 2D surface, discontinuous fibril-based surfaces (2.5D) and fibril-based 3D geometries. Migrating B16F1 mouse melanoma cells expressing GFP–actin developed striking diversity and adaptation of cytoskeletal organization and migration efficacy in response to collagen organization. 2D geometry enabled keratinocyte-like cell spreading and lamellipod-driven motility, with barrier-free movement averaging the directional vectors from one or several leading edges. 3D fibrillar collagen imposed spindle-shaped polarity with a single cylindrical actin-rich leading edge and terminal filopod-like protrusions generating a single force vector. As a mixed phenotype, 2.5D environments prompted a broad but fractalized leading lamella, with multiple terminal filopod-like protrusions engaged with collagen fibrils to generate an average directional vector from multiple, often divergent, interactions. The migratory population reached >90% of the cells with high speeds for 2D, but only 10–30% of the cells and a 3-fold lower speed range for 2.5D and 3D substrates, suggesting substrate continuity as a major determinant of efficient induction and maintenance of migration. These findings implicate substrate geometry as an important input for plasticity and adaptation of the actin cytoskeleton to cope with varying ECM topography and highlight striking preference of moving cells for 2D continuous-shaped over more complex-shaped discontinuous 2.5 and 3D substrate geometries.
5

Liang, Yawen, Shunli Wang, Yongcun Fan, Xueyi Hao, Donglei Liu, and Carlos Fernandez. "State of Health Prediction of Lithium-Ion Batteries Using Combined Machine Learning Model Based on Nonlinear Constraint Optimization." Journal of The Electrochemical Society 171, no. 1 (2024): 010508. http://dx.doi.org/10.1149/1945-7111/ad18e1.

Full text
Abstract:
Accurate state of health (SOH) estimation of battery systems is critical to vehicle operation safety. However, it is difficult to guarantee the performance of a single model due to the unstable quality of raw data obtained from lithium-ion battery aging and the complexity of operating conditions in actual vehicle operation. Therefore, this paper combines a long short-term memory (LSTM) network, with strong temporality, and support vector regression (SVR), with nonlinear mapping and small-sample learning. A novel LSTM-SVR combined model with strong input features, low computational burden and multiple combined advantages is proposed for accurate and robust SOH estimation. Nonlinear constraint optimization is used to assign weights to the individual models by minimizing the sum of squared errors of the combined model, which combines strengths while compensating for weaknesses. Furthermore, voltage, current and temperature change curves during battery charging were analyzed, and indirect health features (IHFs) with a strong correlation with capacity decline were extracted as model inputs using correlation analysis and principal component analysis (PCA). The NASA dataset was used for validation, and the results show that the LSTM-SVR combined model has good SOH estimation performance, with MAE and RMSE less than 0.75% and 0.97%, respectively.
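The weighting step described in the abstract can be sketched for the two-model case: with a convex combination w·LSTM + (1-w)·SVR, the w minimizing the sum of squared errors has a closed form, clipped to [0, 1]. This is a simplified stand-in for the paper's nonlinear constraint optimization, not its actual procedure; all names are illustrative.

```python
def combine_weight(pred_a, pred_b, truth):
    """Optimal convex-combination weight w for w*pred_a + (1-w)*pred_b,
    minimizing the sum of squared errors, with w constrained to [0, 1].

    Writing d = pred_a - pred_b and r = truth - pred_b, the error is
    w*d - r, so the unconstrained minimizer is sum(d*r) / sum(d*d).
    """
    d = [a - b for a, b in zip(pred_a, pred_b)]
    r = [t - b for t, b in zip(truth, pred_b)]
    denom = sum(x * x for x in d)
    if denom == 0:
        return 0.5  # models identical; any weight is optimal
    w = sum(x * y for x, y in zip(d, r)) / denom
    return min(1.0, max(0.0, w))
```

With more than two base models the same idea becomes a small constrained least-squares problem (weights non-negative, summing to one), which is where a general nonlinear-constraint solver comes in.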
6

Shi, Haoqiang, Shaolin Hu, and Jiaxu Zhang. "LSTM based prediction algorithm and abnormal change detection for temperature in aerospace gyroscope shell." International Journal of Intelligent Computing and Cybernetics 12, no. 2 (2019): 274–91. http://dx.doi.org/10.1108/ijicc-11-2018-0152.

Full text
Abstract:
Purpose: Abnormal changes in temperature directly affect the stability and reliability of a gyroscope. Predicting the temperature and detecting abnormal changes is of great value for timely understanding of the working state of the gyroscope. Considering that the actual collected gyroscope shell temperature data have strong non-linearity and are accompanied by random noise pollution, the prediction accuracy and convergence speed of the traditional method need to be improved. The purpose of this paper is to use a predictive model with strong nonlinear mapping ability to predict the temperature of the gyroscope, improve prediction accuracy and detect abnormal changes.
Design/methodology/approach: A double-hidden-layer long short-term memory (LSTM) network is presented to predict temperature data for the gyroscope (including single-point and period prediction), an evaluation index of the prediction effect is proposed, and the prediction of shell temperature data is compared across a BP network, a support vector machine (SVM) and the LSTM network. The estimated value is used to detect abnormal changes in the gyroscope.
Findings: Combining simulation with measured gyroscope data, the effect of different network hyperparameters on shell temperature prediction is analyzed, and the LSTM network can be used to predict the temperature (time-series data). Comparing the performance indicators of the different prediction methods, the shell temperature estimate from the LSTM is the most accurate and can meet the requirements of abnormal change detection. Quick and accurate diagnosis of different types of gyroscope faults (steps and drifts) can be achieved by setting reasonable data window lengths and thresholds.
Practical implications: The LSTM model is a deep neural network with multiple non-linear mapping levels that can abstract the input signal layer by layer and extract features to discover deeper underlying laws. The improved method addresses strong non-linearity and random noise pollution in time series, and the estimated value can be used to detect abnormal changes in the gyroscope.
Originality/value: Based on the LSTM network, a double-hidden-layer LSTM is presented to predict temperature data for the gyroscope (including single-point and period prediction), and the effectiveness and feasibility of the algorithm are validated using shell temperature measurements. The prediction of shell temperature data is compared across a BP network, SVM and the LSTM network. The LSTM network has the best prediction performance and is used to predict the temperature of the gyroscope, improving prediction accuracy and detecting abnormal changes.
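The detection step, comparing predictor output to measurements over a data window against a threshold, can be sketched independently of the LSTM itself. Window length and threshold below are illustrative, not values from the paper.

```python
def detect_abnormal(measured, predicted, window, threshold):
    """Flag window start indices where the mean absolute prediction
    residual exceeds a threshold, a simple way to surface steps and
    drifts once a predictor tracks the normal signal well."""
    flags = []
    for i in range(len(measured) - window + 1):
        resid = [abs(m - p) for m, p in
                 zip(measured[i:i + window], predicted[i:i + window])]
        if sum(resid) / window > threshold:
            flags.append(i)
    return flags
```

A step fault shows up as a sudden jump in the windowed residual, while a drift raises it gradually, so the window length trades detection latency against false alarms.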
7

Bagde, Vandana, and C. G. Dethe. "Performance improvement of space diversity technique using space time block coding for time varying channels in wireless environment." International Journal of Intelligent Unmanned Systems 10, no. 2/3 (2020): 278–86. http://dx.doi.org/10.1108/ijius-04-2019-0026.

Full text
Abstract:
Purpose: A recent innovative technology in wireless communication is the multiple input multiple output (MIMO) system, which became popular for its higher data transmission speed and is being examined and implemented for the latest broadband wireless networks. Although the high-capacity wireless channel has been identified, better techniques are still required to achieve increased data transmission speed with acceptable reliability. Systems with multiple antennas at the transmitter and receiver fall into two categories: diversity techniques and spatial multiplexing methods. Diversity techniques improve the reliability of the transmitted signal; their fundamental idea is to transform a wireless channel such as Rayleigh fading into a steady additive white Gaussian noise (AWGN) channel free of catastrophic signal fading. The maximum transmission speed achievable by spatial multiplexing is nearly equal to the MIMO channel capacity, whereas for diversity methods it is much lower. With the advent of the space-time block coding (STBC) antenna diversity technique, higher-speed data transmission is achievable for spatially multiplexed MIMO (SM-MIMO) systems. At the receiving end, signal detection is a complex task for SM-MIMO systems. Additionally, a link adaptation method is implemented to decide the appropriate coding and modulation scheme, such as the space diversity technique STBC, to use radio resources efficiently. The proposed work attempts to improve signal detection at the receiver by employing the STBC diversity technique with linear detection methods such as zero forcing (ZF), minimum mean square error (MMSE), ordered successive interference cancellation (OSIC) and maximum likelihood detection (MLD). The performance of MLD is found to be better than that of the other detection techniques.
Design/methodology/approach: Alamouti's STBC uses two transmit antennas regardless of the number of receive antennas. In the coding matrix, the rows represent different time instants and the columns the symbols transmitted through each antenna; the first and second rows correspond to the first and second time instants, respectively. At time t, symbols s1 and s2 are transmitted from antenna 1 and antenna 2, respectively. Assuming each symbol has duration T, at time t + T the symbols -s2* and s1*, where (.)* denotes the complex conjugate, are transmitted from antenna 1 and antenna 2, respectively. Reception and decoding depend on the number of receive antennas. For the case of one receive antenna, the received signals arrive at antenna 1; hij is the channel transfer function from the jth transmit antenna to the ith receive antenna, n1 is a complex random variable representing noise at antenna 1, and x(k) denotes x at time instant k, i.e. at time t + (k - 1)T.
Findings: The results for maximal ratio combining (MRC) with the 1 × 4 scheme show that the BER curve drops to 10^-4 at a signal-to-noise ratio (SNR) of 10 dB, whereas for the MRC 1 × 2 scheme the BER drops to 10^-5 at an SNR of 20 dB. Results in Table 1 show that when STBC is employed for MRC with the 1 × 2 scheme (one transmit antenna and two receive antennas), the BER comes down to 0.0076 at Eb/N0 of 12; with the MRC 1 × 4 scheme, the BER drops to 0 at Eb/N0 of 12. Thus the performance of MRC with STBC is improved. When the STBC technique is used with the 3 × 4 scheme at an SNR of 10 dB, the BER approaches 10^-6 (Figure 7.3). Comparing AWGN and Rayleigh fading channels, for the AWGN channel the BER equals 0 at an SNR of 13.5 dB, whereas for the Rayleigh fading channel the BER is near 10^-3 at Eb/N0 = 15. Simulation results (Figure 7.2) show the BER drops to 0 at an SNR of 12 dB.
Research limitations/implications: Optimal design and successful deployment of high-performance wireless networks present a number of technical challenges, including regulatory limits on usable radio-frequency spectrum and a complex time-varying propagation environment affected by fading and multipath. The effect of multipath fading in wireless systems can be reduced by using antenna diversity. Previous studies show the performance of transmit diversity with narrowband signals using linear equalization, decision feedback equalization, maximum likelihood sequence estimation (MLSE), and spread-spectrum signals using a RAKE receiver. The available interference cancellation techniques compatible with STBC schemes require multiple antennas at the receiver; while this is not a strong constraint at the base station, it remains a challenge at the handset due to cost and size limitations. For this reason, the SAIC technique, an alternative to complex ML multiuser demodulation, is still of interest for 4G wireless networks using MIMO technology and STBC in particular. In a system with characteristics similar to the North American digital mobile radio standard IS-54 (24.3 k symbols per second with an 81 Hz fading rate), adaptive retransmission with time deviation is not practical.
Practical implications: Evaluation of performance in terms of bit error rate and convergence time shows that the MLD technique outperforms the others in received SNR and decoding complexity. MLD performs well, but with a higher number of antennas it requires more computational time, increasing hardware complexity. When the MRC scheme is implemented for a single input single output (SISO) system, the BER drops to 10^-2 at an SNR of 20 dB; employing MIMO systems with MRC therefore yields improved BER-versus-SNR results. A comparative study of the detection techniques was conducted. Initially the ZF detection method was used, then modified to ZF with successive interference cancellation (ZF-SIC); with successive interference cancellation, better performance is observed than for ML and MMSE estimation. For the 2 × 2 scheme with QPSK modulation, ZF-SIC requires more computational time than ZF, MMSE and ML, but gives improved BER compared with ZF. The detection algorithm produces ZF-based decision statistics for a desired sub-stream from the received vector, which contains interference from previously transmitted sub-streams; a decision is then made on the secondary stream, and the regenerated contribution is subtracted from the received vector. Without interference cancellation, system performance is reduced but computational cost is saved. With cancellation, as H is deflated, the MMSE coefficients are recalculated at each iteration; without cancellation, they are computed only once because H remains unchanged. For the MMSE 4 × 4 BPSK scheme, a bit error rate of 10^-2 at 30 dB is observed. In general, the most demanding step of the detection algorithm is the computation of the MMSE coefficients, whose complexity grows with the number of transmit antennas; however, with adaptive MMSE receivers on slowly fading channels, the signal can be recovered with complexity linear in the number of transmit antennas. The performance of MMSE and MMSE-SIC is observed for 2 × 2 and 4 × 4 BPSK and QPSK modulation schemes. The drawback of MMSE-SIC is that the first detected signal sees noise and interference from (NT - 1) signals, while signals processed later see less interference as cancellation progresses. This difficulty can be overcome by the OSIC detection method, which orders the processed layers by decreasing signal power, or by allocating power to the transmitted signals according to the processing order. With the successive scheme, NT delay stages are required to carry out the detection process. The work also compares BER across modulation schemes and antenna counts. MLD determines the Euclidean distance between the received signal vector and the product of every possible transmitted signal vector with the given channel H, and selects the one with the minimum distance. Results show that a higher diversity order is obtained by employing more antennas at both the transmitting and receiving ends; MLD with the 8 × 8 binary phase shift keying (BPSK) scheme offers a bit error rate near 10^-4 at an SNR of 16 dB, by using Alamouti space-ti.
Social implications: Companies everywhere are pushing to get products to market faster; missing a market window or a design cycle can be a major setback in a competitive environment. This pressure comes as companies also push toward "leaner" organizations that do more with less. Current test and measurement equipment does not support these trends well in such a high-pressure design environment: measuring signals across multiple domains requires multiple pieces of equipment, increasing capital or rental expenses; the methods available for cross-domain, time-correlated measurements are inefficient, reducing engineering efficiency; when equipment for logic analysis, time-domain and RF spectrum measurements is used only occasionally, operators must re-learn each separate instrument; and the equipment needed to measure wide-bandwidth, time-varying spectral signals is expensive. What is needed is a measurement instrument with a common user interface that integrates multiple measurement capabilities into a single cost-effective tool that can efficiently measure signals in current wide-bandwidth, time-correlated, cross-domain environments. The market for wireless communication using STBCs has large scope for expansion in India; the proposed work therefore has techno-commercial potential, and the product can be patented. The project should in turn help remote areas of the nearby region, particularly the Gadchiroli district, the Melghat Tiger Reserve in Amravati district, Nagjira and so on, where electricity is unavailable and network coverage is a constant problem. In regions where electricity is available but in such short supply that it cannot be used during peak hours, the stand-alone space diversity technique STBC can help establish connections during coverage problems, giving higher data transmission rates and better quality of service (QoS) with fewer dropped connections. The trend toward wireless everywhere is profoundly changing the responsibilities of embedded designers as they struggle to incorporate unfamiliar RF technology into their designs, frequently without the proper equipment needed to perform the tasks.
Originality/value: The work is original.
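The Alamouti scheme described in the abstract (rows are time slots, columns are transmit antennas, with -s2* and s1* in the second slot) and its standard single-receive-antenna combining can be sketched directly. The channel values used in any example are arbitrary, and noise is omitted for clarity.

```python
def alamouti_encode(s1, s2):
    """2x2 Alamouti codeword: rows are time slots t and t+T,
    columns are transmit antennas 1 and 2."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna.

    Given r1, r2 received in the two slots over channels h1, h2,
    returns estimates scaled by (|h1|^2 + |h2|^2), from which the
    symbols are then detected.
    """
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

In the noiseless case the combiner output is exactly (|h1|^2 + |h2|^2) times the transmitted symbol; that coherent sum of both channel gains is the diversity benefit the abstract describes.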
8

Ameri, A. "Recent Advances in EMG Pattern Recognition for Prosthetic Control." Journal of Biomedical Physics and Engineering 10, no. 2 (2020). http://dx.doi.org/10.31661/jbpe.v0i0.2002-1076.

Full text
Abstract:
Limb loss results in significant debilitation and reduces the quality of life of the affected individuals [ 1 ]. To restore the lost limb’s function, myoelectric systems have been widely used in powered prostheses [ 2 ]. With this approach, the motor intent is estimated from the electromyogram (EMG) signals recorded by electrodes which are placed on the skin surface above the residual muscles [ 1 ]. The principle of commercial myoelectric schemes has not changed in several decades, and is referred to as conventional control [ 2 ]. This technique uses a measure of amplitude (such as mean absolute value over a time window) of the EMG signals recorded by electrodes placed at two control sites, preferably over a pair of antagonist muscles of the residual limb, to control a single motion i.e. degree of freedom (DoF), for example hand opening closing [ 2 ]. To change the DoF, a mode switch is conducted by muscle co-contraction or a hardware switch [ 2 ]. The mode switch, however, results in an unnatural control of multiple DoFs [ 2 ]. To overcome this challenge, a significant body of research has been conducted on pattern recognition techniques [ 3 ]. With this approach, a classifier is trained to discriminate between different DoFs, using patterns from multi-channel EMG input data. Promising results have been achieved in the literature for classification of several DoFs [ 2 ]. Since activities of daily living include simultaneous movements of multiple DoFs, combined motions must be also included as separate classes, and they have to be conducted in the training set [ 4 ]. The limitation of this approach, however, is that it does not allow the DoFs in combined motions to have different magnitudes. As a solution to this problem, regression-based systems have been proposed [ 5 , 6 ], where a regressor is trained to estimate each DoF, using data from single and combined motions. 
This strategy provides independent simultaneous control, because it does not limit the DoFs to the same amplitude. Classification- and regression-based systems are the two categories of pattern recognition methods. Due to the high dimensionality of EMG signals, the EMG instantaneous values are not directly used as the inputs to classifiers/regressors [ 1 ]. Instead, a set of features is extracted from a time window (100-200 ms) of EMG signals [ 7 ]. Feature engineering is the process of design and extraction of features with the highest amount of useful information to maximize the classification/regression accuracy [ 8 ]. Among various EMG features proposed in the literature, the Time Domain (TD) set [ 9 ] is the most popular and includes mean absolute value, waveform length, zero-crossings, and slope sign changes. The past few years have seen the advent of deep learning-based myoelectric control [ 4 , 10 ]. Deep learning can perform classification/regression tasks directly from high-dimensional raw data, without feature engineering [ 8 ]. The convolutional neural network (CNN) [ 11 ] is one of the most widely used deep learning frameworks. The successive convolution layers of CNNs can learn useful features from the EMG data to estimate the motor intent [ 4 ]. As the outcomes of previous studies [ 4 , 10 ] confirm, CNNs outperform classical models such as support vector machines (SVMs) with engineered feature sets. EMG pattern recognition schemes have yet to be deployed in commercial prostheses. The major challenge is performance degradation due to disturbances such as electrode shift, skin impedance change, muscle size variations, and the learning effect [ 2 ]. Recent studies (e.g. [ 12 , 13 ]) have proposed methods to improve the robustness of EMG pattern recognition to such disturbances.
These methods as well as new deep learning schemes that eliminate feature engineering, may pave the way for commercial implementation of myoelectric pattern recognition prostheses. Moreover, independent simultaneous control can be achieved by using regression deep learning models. These promising methods have the potential to significantly outperform existing commercial systems. Consequently, the missing functions in people with limb loss can be restored more efficiently by delivering a more natural and intuitive control.
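The Time Domain feature set named in the abstract can be sketched for one analysis window of a single EMG channel. For brevity this sketch omits the amplitude thresholds that practical implementations usually apply to the zero-crossing and slope-sign-change counts; the function name is illustrative.

```python
def td_features(x):
    """Compute the classic time-domain EMG features over one window:
    mean absolute value (MAV), waveform length (WL),
    zero crossings (ZC) and slope sign changes (SSC)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    zc = sum(1 for i in range(n - 1) if x[i] * x[i + 1] < 0)
    ssc = sum(1 for i in range(1, n - 1)
              if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0)
    return {"mav": mav, "wl": wl, "zc": zc, "ssc": ssc}
```

In a full pipeline these four numbers per channel, concatenated over all channels, form the input vector to the classifier or regressor for each 100-200 ms window.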
9

Beemiller, Peter, Jordan Jacobelli, and Matthew Krummel. "Imaging and Analysis of OT1 T Cell Activation on Lipid Bilayers." January 7, 2015. https://doi.org/10.5281/zenodo.13784.

Full text
Abstract:
Authors: Peter Beemiller, Jordan Jacobelli & Matthew Krummel ### Abstract Supported lipid bilayers are frequently used to study cell membrane protein dynamics during immune synapse formation by T cells. Here we describe methods for the imaging and analysis of OT1+ T cell activation and T-cell receptor (TCR) dynamics on lipid bilayers. ### Introduction T cells are activated at immune synapses when TCRs bind agonist ligands on antigen presenting cells (APCs). Glass coverslip–supported lipid bilayers provide a system for in vitro T cell activation and immune synapse formation. In these systems, the supported bilayer acts as a surrogate APC, presenting all the factors needed to trigger TCR signaling and synapse formation. In a minimal activation system, only pMHC and ICAM are incorporated to activate TCRs. Lipid bilayers provide a number of technical advantages over authentic APCs. Coverslip–spanning bilayers can be formed, allowing large numbers of T cells to be deposited and analyzed in parallel. Unlike synapse formation on an authentic APC, the support restricts molecular reorganizations in the synapse membrane to a Euclidean plane. Using a total internal reflection fluorescence (TIRF) microscope restricts the imaging field to within ~100 nm of the synaptic interface. However, care should be taken when extrapolating synapse characteristics seen on bilayers to physiological synapses. In vivo, T cells generate synapses with irregular geometries, as they continuously crawl over APCs and potentially encounter other T cells. The lipid bilayer system consists of a polyethylene glycol-cushioned lipid bilayer bearing Ni-NTA and biotin modified phospholipids. The PEG5,000 cushion is formed by the inclusion of a small fraction of phospholipids with PEG5,000 polymers covalently attached to the head groups (1,2), which improves the bilayer uniformity and streptavidin mobility. 
The Ni-NTA– and biotin–modified lipid head groups are used to capture dodecahistidine–sICAM-1 (3) and tetravalent streptavidin, respectively. Captured streptavidin is then used to bind monobiotinylated SIINFEKL:H2-K(b) pMHC complexes, resulting in stimulating bilayers that can activate OT1 TCR signaling (Fig. 1). The bilayers can be standardized by creating silica microsphere supported bilayers and comparing the protein ligand levels to the levels displayed on antigen presenting cells (4). Alternatively, the microsphere bilayer standards can be compared to reference beads to estimate the density in number of molecules per unit area. We standardize stimulating bilayers using bone marrow derived dendritic cells (BMDCs) pulsed with SIINFEKL peptide at a concentration that produces maximum in vitro T cell proliferation as a reference APC. ### Reagents 1. Mice - CD8+ OT1+ TCR-transgenic mice, which recognize the SIINFEKL peptide of ovalbumin bound to H-2K(b) (5), can be obtained from Taconic, and then bred in-house. The floxed MyH9 mice, described previously (6), are crossed with OT1+ mice. - Growth Media - Phoenix cells are maintained in DMEM supplemented with 10% fetal bovine serum, 100 U/ml penicillin, 0.1 mg/ml streptomycin, 2 mM L-glutamine, 10 mM HEPES and 50 μM β-mercaptoethanol. T cells are maintained in complete RPMI: RPMI supplemented with 10% fetal bovine serum, 100 U/ml penicillin, 0.1 mg/ml streptomycin, 2 mM L-glutamine, 10 mM HEPES and 50 µM β-mercaptoethanol. 
- Chemicals and Labeling Reagents - Reconstitute reagents in DMSO or methanol, where indicated, and store at -20 C: SIINFEKL peptide (Anaspec 60193), 10 mg/ml; Jasplakinolide (EMD Biosciences 420107), 1 mM; Blebbistatin (EMD Biosciences 203390), a 100 mM racemic mixture (use at a final concentration of 50 μM active enantiomer); Fura-2 AM (Invitrogen F1221), 1 mM; Alexa Fluor 488-phalloidin (Invitrogen A12379), 300 U/ml in methanol; CellTracker Orange (Invitrogen C2927) and CFSE (Invitrogen C1157), 10 mM in DMSO. Vybrant DiO cell-labeling solution (Invitrogen V-22886) is stored at 22 C. - Antibodies and Recombinant Proteins - H57-597 anti-TCRΒ (Bio-X-Cell BE0102) conjugated to Alexa Fluor 568; YN1/1.7.4 anti-ICAM (UCSF hybridoma facility) conjugated to Alexa Fluor 488; SIINFEKL-H2-Kb-specific antibody 25-D1.16 (eBioscience, 12-5743); Biotinylated H-2K(b) loaded with SIINFEKL (Beckman Coulter or obtained from the NIH Tetramer Facility); the dodecahistidine-tagged extracellular domain of ICAM1 (his-ICAM) is purified from the supernatant of High Five cells transfected using a baculovirus expression system (3). The protein is purified using nickel-affinity resin, followed by MonoQ, then Superdex FPLC. Fractions with monomeric his-ICAM should be collected, mixed with an equal volume of glycerol, and stored at -20 C until use. His-ICAM was fluorescently labeled for FRAP using Alexa Fluor 488 succinimidyl ester. - Phospholipids - Phospholipid stocks (Avanti Polar Lipids) should be purchased as chloroform stocks and stored under nitrogen at -20 C. The bilayer component lipids are 16:0-18:1 PC (850457C), 18:1 DGS-NTA(Ni) (790404C), 18:1 Biotinyl Cap PE (870273C), 18:1 DOPE-PEG5000 (880230C). ### Equipment 1. Microscope - TIRF microscopes are available from a number of microscope manufacturers. The TIRF microscope used here was used in two configurations. 
In the first configuration, a Zeiss Axiovert 200M, with a Laser TIRF I system and a 1.45 NA, 100× Plan-Fluar objective, was used (7). A 50 mW 491 nm laser and a 25 mW 561 nm DPSS laser (Cobolt SE) were fiber–coupled to the Laser TIRF I slider for TIRF illumination. A Stanford Photonics XR/MEGA-10Z iCCD camera was used to collect TIRF images. In this configuration, QED InVivo (Media Cybernetics) was used to coordinate illumination settings and control image acquisition. In an updated configuration, an Applied Scientific Instrumentation MS-2000 automated stage, a Photometrics Evolve emCCD in place of the iCCD camera, and an improved Zeiss 1.46 NA, 100× Plan-Apochromat were added to the TIRF microscope. In this configuration, Metamorph (Universal Imaging) was used to coordinate hardware and collect images from the emCCD. An intermediate lens in the Axiovert 200M allows the image sampling resolution to be increased (0.16 to 0.1 μm using the emCCD). For two-color TIRF imaging, a Photometrics DV2, two-channel simultaneous imaging system with a 560 nm long pass dichroic filter and 525/50 nm and 605/70 nm bandpass emission filters was used to split the emCCD camera field into two image channels for GFP and Alexa Fluor 568 imaging. A passive splitter allows faster acquisition than wavelength selection using a filter-wheel, at the cost of half of the camera field. To image calcium fluxes, we use the updated TIRF microscope in standard epifluorescent mode, employing a DG-4 (Sutter Instruments) with 340 nm and 380 nm excitation bandpass filters (Chroma Technology) and a Zeiss 1.3 NA, 40× PlanFluar objective. ### Procedure *OT-I T cell blast preparation* 1. Prepare cell suspensions in complete RPMI from the spleen and lymph nodes of OT-I transgenic mice. - Adjust splenocytes to ~10^7 cells/ml, then incubate at 37 C for 30 min with 0.1 µg/ml SIINFEKL peptide. Rinse three times. - Mix splenocytes and lymph node cells 1:1 to stimulate T cells.
If T cells will be retrovirally transduced, plate 1 ml of cells per well of a 24-well plate. Otherwise, cells can be cultured in a T75 flask. Supplement with fresh media plus IL-2 daily starting two days after stimulation. *Retroviral transductions* 1. On the day T cells are stimulated, transfect Phoenix cells with 5 μg pCL-Eco helper virus plasmid and 15 μg of the retroviral vector. The next day, change the media to fresh complete DMEM. - Retroviral infection is performed 48 and 72 hours after stimulating the T cells. Mix supernatants from transfected Phoenix cells with IL-2 and polybrene. Add 1 ml of supernatant to each well with T cells and spin at room temperature for 1 hour. Return to the incubator. In the afternoon after the second spin infection, transfer cells from the plate to fresh media in a T75. *Conditional myosin II knockout* 1. Coat two 24-well plates with anti-CD3 (clone 2C11) at 2 μg/ml in PBS using 0.5 ml/well. - Make a mix of 3.4×10^7 lymph node cells and 6.6×10^7 splenocytes in 50 ml of complete RPMI and add anti-CD28 (clone PV-1) at 2 μg/ml. - Aspirate off the anti-CD3 coating solution from the two 24-well plates and then plate 1 ml/well of your cell solution. - 48 hours later (day 2) transfer the cells to fresh 24-well plates and spin-infect with the viral supernatants from Phoenix cells transfected with pMIG (GFP) or pMIG-Cre (Cre-GFP fusion). One plate gets the GFP virus and the other the Cre-GFP virus. Leave one well as a non-infected control for setting up the sort. - 24 hours later (day 3) transfer the cells from the plates to T150 flasks (one for each group) and add ~50 ml of R10 with 10 U/ml IL-2 (to bring to a total of ~100 ml). - 24 hours later (day 4) prepare the cells for sorting. Filter through a 40 µm strainer and resuspend in 5–6 ml. Also prepare 2–3 15 ml collection tubes with 2 ml FCS and 2 ml R10. After the sort, plate the cells at 2×10^6/ml in R10 with 10 U/ml IL-2 and let the cells rest overnight.
- The sorted T cell blasts can be used between day 5 and 6. The cells will likely need to be purified over Ficoll the day after sorting to remove dead cells and debris. Day 6 generally has better depletion than day 5, but also more cell mortality. Ideally use the cells late day 5 or early day 6. The efficiency of depletion should be routinely tested by either intracellular FACS stain or western blot. *Preparation of cells for imaging* 1. On the day of imaging, collect live T cells onto a Histopaque cushion. Wash and resuspend in complete RPMI without phenol red indicator. - For calcium imaging, load cells with 1 μM Fura-2 AM in PBS for 20 min at 22 C before transferring to complete RPMI without indicator. To label surface TCRs, resuspend 2×10^6 cells in ~0.1 ml complete RPMI without indicator and 1 μg Alexa Fluor 568-labeled H57-597 anti-TCRβ. After 30 min on ice, wash cells with complete RPMI and hold until imaging. *Inhibitor studies* 1. To inhibit actin depolymerization, add jasplakinolide to cells on ice. After 15 min, transfer cells to a pre-warmed bilayer well containing jasplakinolide at the same concentration used to treat the cells. Because of variability in jasplakinolide activity from different lots, an appropriate treatment (concentration and incubation duration) should be determined for each lot. This is critical, as high concentrations of jasplakinolide and extended incubations with the drug can induce a polymerization defect, in addition to the expected depolymerization defect (9). For control runs, add DMSO vehicle to the cells and treat identically to the inhibitor–treated cells. - To inhibit myosin II activity, add blebbistatin to cells for 30 min before addition to a well preloaded with blebbistatin. Blebbistatin–containing media should be kept in the dark, and cells treated with blebbistatin should not be illuminated with wavelengths of light below 540 nm, to avoid light-induced protein-crosslinking by blebbistatin (10,11).
Control cells are treated with DMSO vehicle in an identical manner to drug-challenged cells. *Liposomes and Bilayers* Liposome preparation: 1. To prepare liposomes, mix phospholipids (96.5% PC, 2% DGS-NTA(Ni), 1% Biotinyl-Cap-PE and 0.5% PEG5,000-PE) in a round bottom flask. First dry the mixture under a stream of nitrogen, then overnight under vacuum. - The following day, rehydrate the lipid cakes with PBS to a total phospholipid concentration of 4 mM. Allow the liposomes to hydrate for 1 hour at room temperature, mixing occasionally by swirling the flask. - Subject the crude liposome preparation to five freeze-thaw cycles using liquid nitrogen. - To prepare small unilamellar vesicles, extrude the crude mixture through a 100 nm pore-size polycarbonate filter (Whatman 8000309) using an Avestin LiposoFast mini-extruder. Pass the liposome mixture through the extruder for 10–20 cycles. - Store the liposomes at 4 C between uses. Do not freeze them. Liposomes are good for 1 week. Glass preparation: Glass can be cleaned in advance, dried and stored until use. 1. Clean LabTek II chambered coverglasses with 10 M NaOH for 10 min, then 1 M HCl in 70% ethanol for 10 min. - Rinse chambers thoroughly with 18 MΩ water. Lipid bilayer setup: 1. Dilute liposomes ten-fold (0.4 mM final) with PBS, and apply 0.25 ml of the liposome mixture to each well of a clean chamber. After 30 min, rinse excess liposomes away by repeatedly filling each well with PBS and removing all but 0.25 ml of the overlay. Repeat this rinsing procedure until each well has been washed with ~12 ml of PBS. - Block bilayers for 30 min by adding an equal volume of 2% bovine serum albumin in PBS (PBS-BSA). - Load streptavidin at 5 µg/ml for 30 min in the PBS-BSA, and then wash away the excess streptavidin. - Dilute his-ICAM and biotinylated pMHC from working stocks into PBS-BSA, and then add to bilayers to achieve the desired final loading concentration.
Loading concentrations of 2.5×10^2 fg/ml to 2.5×10^7 fg/ml biotinylated pMHC or 62.5–500 ng/ml his-ICAM are routinely used in our lab (Fig. 1). - After loading proteins for 30 min at 22 C, rinse the bilayers and warm them before adding cells. Streptavidin is loaded in excess relative to pMHC molecules (≥400:1 streptavidin:pMHC) to minimize the formation of multivalent pMHC-streptavidin complexes. Working stocks of his-ICAM and biotinylated pMHC monomer at 25 µg/ml in PBS-BSA should be prepared weekly from frozen stocks and stored at 4 C between uses. Analysis of protein motility: 1. To assess the uniformity of the lipid bilayers and the motility of proteins ligated to the bilayers, set up bilayers, and then load them with TRITC-conjugated streptavidin or Alexa Fluor 488-his-ICAM in place of the non-fluorescent species. Bilayer setup should be otherwise identical to the setup for bilayers used for T cell synapse imaging. - Bilayers should be scanned in the microscope to qualitatively assess uniformity. To quantify protein mobility, the lipid bilayers are analyzed by fluorescence recovery after photobleaching (FRAP). We use either a C1si confocal microscope in non-spectral mode (Nikon Instruments) or a Mosaic Targeted Illumination system (Photonic Instruments) attached to the TIRF microscope. Random regions are selected on the bilayer, photobleached, and time-lapse, widefield images of the bilayer acquired post-bleach to quantify recovery. The mobile fraction of his-ICAM is typically 90‒99%, while the streptavidin mobile fraction is typically 93‒99%. Standardization: To generate standardized bilayers, measure the loading of his-ICAM and biotinylated pMHC onto bilayers relative to bone marrow derived dendritic cells (BMDCs), a prototypical antigen presenting cell (Fig. 1). BMDCs are loaded with 100 ng/ml SIINFEKL peptide in complete RPMI at 37 C for 30 min, and then rinsed thoroughly. 1.
Generate bilayer standards: set up lipid bilayers on 5 μm diameter silica microspheres (Bangs Labs, Fishers, IN) using the same procedure used for coverslip-supported bilayers. - Load the bilayer standards with pMHC or ICAM at various concentrations and wash. Stain the microsphere bilayer standards and BMDCs for ICAM and SIINFEKL:H-2K(b) using YN1/1.7.4 and 25-D1.16, an antibody specific for the SIINFEKL:H-2K(b) complex, respectively. Flow cytometric analysis of the BMDCs and microspheres is performed on any suitable flow cytometer. We use a BD Biosciences FACSCalibur or Accuri C6. *Microscopy* 1. To image live OT1+ T cells interacting with the bilayers, add 10^5 cells in 0.1 ml of complete RPMI without Phenol Red indicator to the 0.5 ml of PBS overlaying the bilayer. Collect images as needed: - For Fura ratiometric image time-lapse sequences, start acquisition as soon as the first cells tether to the bilayer (typically within 1 minute of addition of cells). Fura-2 component images, consisting of 340/10 nm or 380/10 nm excitation with emission recorded at 520/20 nm, should be collected with 33‒66 ms exposures at 15 s intervals for 20 min. - For TIRF microscopy time-lapses, locate cells undergoing initial spreading onto bilayers and acquire TIRF images at 1 or 2 s intervals using 33‒100 ms exposure lengths for 3‒5 min. Cells can be imaged until all cells are bound to the bilayers (typically 10‒15 min) or, when imaging jasplakinolide-treated cells, for 5 min after delivering cells into wells. - For imaging of fixed samples, prelabel cells with Alexa Fluor 568-H57-597, and then allow cells to interact with pre-warmed bilayers for 15 min. Fix cells with 1–2% paraformaldehyde on the bench. For Alexa Fluor 488-phalloidin staining, permeabilize cells with 0.1% Triton X-100 for 5 min before staining. Volumetric microscopy stacks of synapses can be acquired using a spinning disk confocal microscope with a 1.4 NA, 100× PlanApo objective (Nikon).
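As noted above, the microsphere bilayer standards can also be compared to reference beads to estimate ligand densities in molecules per unit area. A minimal Python sketch of that conversion is shown below, assuming a linear reference-bead calibration of median fluorescence to molecule counts; the function name and the calibration slope are illustrative, not part of the protocol.

```python
import math

def molecules_per_um2(median_mfi, mfi_per_molecule, bead_diameter_um=5.0):
    """Estimate ligand density on a bilayer-coated silica microsphere.

    median_mfi       : background-subtracted median fluorescence of the bead peak
    mfi_per_molecule : slope of a reference-bead calibration curve (MFI/molecule)
    bead_diameter_um : microsphere diameter; 5 um beads are used in this protocol
    """
    molecules_per_bead = median_mfi / mfi_per_molecule
    surface_area_um2 = math.pi * bead_diameter_um ** 2  # sphere area = pi * d^2
    return molecules_per_bead / surface_area_um2
```

With a hypothetical slope of 1 MFI per molecule, a bead median of ~7.85×10^3 MFI on a 5 μm bead corresponds to roughly 100 molecules/μm².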
### Timing - Liposome preparation: 1.5 hours - Bilayer preparation: 2+ hours - Generation of retrovirally transduced T cells: 5 days - Preparation of T cell blasts for imaging: 1 hour - Imaging: 2–4 hours. ### Troubleshooting *Bilayer uniformity*: The lipid bilayers should be uniform over many mm², but occasional discontinuities are expected. If the discontinuities are frequent, this might indicate an issue with the cleanliness of the glass support, or contamination in the liposome preparation. To test the quality of the naked bilayers, either incorporate a small amount of fluorescently labeled phospholipids into your liposome preparations (e.g., 0.5% Oregon Green 488 DHPE, Invitrogen O-12650), or pre-mix your liposomes with DiO before applying to the glass. *Ligand immobility*: Protein ligand immobility is a common issue. You should first ensure that the bilayers are setting up as uniform, continuous sheets (above). In general, it is also best to use the minimum amount of ligand-binding phospholipids (DGS-NTA(Ni) and Biotinyl-CAP-PE) required to achieve sufficient protein loading. ### Anticipated Results **Image Analysis** The image analysis routines are performed almost entirely using MATLAB (MathWorks) scripts. The scripts for these analyses can be found as attachments, organized by application (tracking, segmentation, etc.). The functions performed by the scripts are described in general below. *Image arithmetic and cell and TCR microcluster tracking* All image arithmetic operations, for example filtering, background subtraction, masking, and division, are performed in MATLAB. Cell tracking for analysis of calcium and cell motility is performed in Imaris using fura-2 ratiometric image series calculated and masked in MATLAB.
To create the fura-2 ratiometric image series, the component images acquired with 340 nm and 380 nm excitation are converted to floating point and the images acquired using 340 nm excitation are divided by the images acquired using 380 nm excitation. Image masks are created using Otsu’s algorithm on the 380 nm component images. Small non-cell debris is removed from the masks, and then the masked ratiometric images are transferred from MATLAB to Imaris for tracking. After tracking, ratiometric intensities for each cell track are normalized to the ratiometric intensity before cell binding to the bilayer. Cell track displacements and normalized ratios are then aligned to the onset of bilayer binding, which typically corresponds to the initiation of calcium fluxes for cells on stimulating bilayers. To calculate synapse parameters, such as mean speed, ratiometric intensities versus distances from the origin, characterization of synapses as high motility, etc., cell intensities and positions are transferred from Imaris to Excel files. The data is then imported from the Excel worksheets into MATLAB for calculation of synapse parameters. TCR microcluster identification is performed using the polynomial fitting with Gaussian weight method (13). Assignment of identified microclusters to tracks is performed in Imaris (Andor) by transferring the microcluster data through the ImarisXT MATLAB interface. Where necessary, broken microcluster tracks are manually linked to generate completed microcluster tracks. All further track manipulations, such as categorization of tracks based on their time of formation, or calculation of movement vectors, are performed after transferring the assembled tracks to data structures in MATLAB.
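The ratio construction and Otsu masking described above can be sketched in Python/numpy (the protocol's own implementation is in MATLAB); the 8-bit image assumption and the function names are illustrative.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: maximize between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def fura_ratio(im340, im380):
    """Mask on the 380 nm image, then divide 340/380 in floating point."""
    t = otsu_threshold(im380)
    mask = im380 > t
    ratio = np.zeros_like(im340, dtype=float)
    # Divide only inside the mask; background pixels stay zero
    np.divide(im340.astype(float), im380.astype(float), out=ratio, where=mask)
    return ratio, mask
```

Debris removal (small objects in the mask) and the transfer to Imaris are omitted from this sketch.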
*Conversion of fura-2 ratiometric intensities to calcium values* To calculate the relative amount of elevated calcium signal detected versus the distance that the cell had displaced from its binding site on the bilayer, the fura-2 ratiometric intensity time series data is divided by the sum of the above-baseline ratiometric intensities at all the time points. This converts the ratiometric intensities to values representing the fraction of all calcium flux detected. The values are then graphed versus the displacement of the cell at the time of the ratiometric intensity measurement, binning the displacement values into 1 μm intervals. *Segmentation of synapses and cSMACs* To define and measure synapse footprints, TCR TIRF image sequences are filtered with a 1‒2 pixel standard deviation Gaussian filter as needed, and then masked with an intensity threshold that coarsely segments the synapse footprint from the background. The appropriate threshold is automatically selected using a minimum cross entropy threshold algorithm, which typically identifies a threshold that represents the full synapse, rather than the bright central region of TCRs. However, all automated segmentation routines should be manually verified for accuracy. In cases where the algorithm fails to identify an appropriate threshold for the synapse, a threshold can be manually selected. Morphological closing, hole-filling and removal of small, unconnected objects are then sequentially performed on each image in the series to yield masks with a single, contiguous region representing the cell footprint over time. In cases where Lifeact-GFP TIRF images are acquired, the GFP image is used to generate synapse masks. To define and measure cSMACs, a threshold is applied to segment the bright, interior accumulations of TCRs (SMACs) from the dimmer peripheral microclusters. This intensity threshold is manually selected for each cell to accurately reflect the borders of the bright SMACs.
This intensity threshold is then applied to all images in the time series to create a preliminary mask of the cSMAC. Occasional peripheral signaling microclusters with above-threshold intensities are then eliminated from the cSMAC mask with a 1 μm² size filter. Morphological closing and hole-filling of the individual SMACs are then applied to generate the cSMAC mask. To account for loosely collected SMACs, rather than generating a single region, the cSMAC is allowed to be represented by multiple SMAC regions. Therefore, to measure the centroid of cSMACs, the area-weighted centroid of all SMAC regions is calculated. *Calculation of TCR microcluster radial displacement and centralization values* Instantaneous TCR microcluster radial displacements are calculated as the dot product of the microcluster movement vector and the vector from the microcluster base position to the center of the cSMAC. This converts the two-dimensional (xy) movements of the microclusters to one-dimensional (radial) values. Calculating the dot product using the vector from the microcluster to the cSMAC establishes the direction to the cSMAC as the positive flow direction. Microcluster radial displacements are calculated for each movement vector in the microcluster track and then cumulatively summed to generate radial displacement series, which represent the radial displacement of microclusters from their initial position. In these graphs, a microcluster is moving away from the cSMAC as the displacement decreases and moving towards the cSMAC as the displacement increases. To calculate instantaneous edge flow values, at each point in the microcluster track, a line from the center of the cSMAC through the microcluster position and to the synapse edge is constructed. The edge of the synapse is determined from the synapse masks, and the intersection of the edge with the line from the cSMAC through the microcluster position is calculated.
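The dot-product radial displacement just described can be sketched in Python/numpy as a stand-in for the MATLAB scripts. Normalizing the cSMAC-direction vector to unit length, so that radial values are in distance units, is an assumption of this sketch, and the function name is illustrative.

```python
import numpy as np

def radial_displacement_series(track, csmac_center):
    """Cumulative radial displacement of one microcluster track.

    track        : (N, 2) xy positions of the microcluster over time
    csmac_center : (2,) xy centroid of the cSMAC
    Movement toward the cSMAC is taken as the positive direction.
    """
    track = np.asarray(track, dtype=float)
    center = np.asarray(csmac_center, dtype=float)
    moves = np.diff(track, axis=0)                  # instantaneous movement vectors
    to_csmac = center - track[:-1]                  # base position -> cSMAC
    units = to_csmac / np.linalg.norm(to_csmac, axis=1)[:, None]
    radial = np.einsum("ij,ij->i", moves, units)    # per-step dot products
    return np.cumsum(radial)                        # radial displacement series
```

A cluster moving straight toward the cSMAC accumulates positive displacement; one moving away accumulates negative displacement, matching the sign convention above.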
This calculation is performed at each position in the microcluster track to create an edge intersection track. Instantaneous edge movement vectors are calculated from these intersection tracks, and edge cumulative radial displacement series are generated as for microclusters. To measure microcluster centralization while accounting for outward movement during spreading, the centralization value of a microcluster is calculated as the difference between: 1) the distance from the microcluster to the cSMAC when the microcluster reached its greatest separation from the cSMAC and 2) the distance from the microcluster to the cSMAC after it centralized. Therefore, the centralization measures the distance microclusters travelled inward from the point at which inward movement began. Imaris is used to generate speeds and straightness factors for the TCR microcluster tracks. These values are then transferred to MATLAB, which is used to calculate microcluster track mean speeds and mean straightness factors. *Calculation of synapse areas relative to cell volumes* To quantify synapse sizes relative to cell volumes, OT1+ T cell blasts are labeled with CFSE and Alexa Fluor 568-H57-597 prior to introduction to stimulating bilayers. The cells are fixed with 1% paraformaldehyde, and then imaged by spinning disk confocal microscopy to collect images of the cytoplasmic volume and TCRs at the cell-bilayer interface. The volumes of the cells are estimated by creating isosurfaces in Imaris using the z-series images of the CFSE-marked cell volume. Synapse areas are measured at the synapse image plane by manually applying a threshold to mask the cell. The equivalent radii from both the volumes and areas are then calculated. The equivalent radius calculated from the cell volume is then taken as the ‘expected’ synapse radius: the synapse radius that a cell of the measured volume would produce if its footprint radius matched the volume-derived equivalent radius.
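The two equivalent radii follow from the standard sphere-volume and circle-area relations. A brief Python sketch (the function and parameter names are illustrative, not from the protocol's MATLAB scripts):

```python
import math

def equivalent_radii(cell_volume_um3, synapse_area_um2):
    """Equivalent radii from a cell volume (sphere) and a synapse area (circle).

    Returns (r_volume, r_area, r_area - r_volume); a positive difference
    means the synapse outgrew the footprint expected from the cell's size.
    """
    r_volume = (3.0 * cell_volume_um3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    r_area = math.sqrt(synapse_area_um2 / math.pi)
    return r_volume, r_area, r_area - r_volume
```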
This volume-derived radius is subtracted from the equivalent radius calculated from the synapse area to calculate the extent to which the synapse outgrew its expected radius. *Segmentation of synapses into edge and interior regions* To segment the synapse into interior and edge regions, the synapse is masked using the Lifeact-GFP TIRF images as described above, and the region of the synapse within 2 μm of the edge is identified at each time point. The edge region is removed from the whole synapse mask to create a second mask for the interior. The whole synapse and interior masks at every time point are then used to generate Delaunay triangulations of the regions. Microclusters are classified based on whether their initial positions were enclosed within the interior Delaunay triangulation (interior microclusters), or were enclosed within the whole synapse triangulation but not the interior triangulation (edge microclusters). By ensuring that microclusters formed within the synapse triangulation, this analysis excludes microclusters formed in nearby cells that might intrude into the image region of the cell being analyzed. Once microcluster track origins are identified, microcluster radial displacements are calculated as described above. *Calculation of Lifeact-GFP intensity derivatives in the regions around microclusters* To calculate the changes in Lifeact-GFP intensity in the regions through which microclusters moved, 1 μm² regions centered on the microcluster positions are generated at all points in their tracks after the initial position. The average intensities of Lifeact-GFP in the cluster regions are then calculated when the microcluster was centered within each region. From these intensities, the average intensities at the time points before the microcluster entered the patch are subtracted to calculate the cluster region intensity changes (temporal derivatives).
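The patch-intensity differencing just described can be sketched as follows, again in Python/numpy rather than the protocol's MATLAB. The square patch and its half-width parameter approximate the 1 μm² regions and are assumptions of this sketch.

```python
import numpy as np

def cluster_region_derivative(frames, track, half_width=3):
    """Change in mean Lifeact-GFP intensity around each microcluster step.

    frames : (T, H, W) image stack
    track  : (T, 2) row/col microcluster positions, one per frame
    For each step t > 0: mean intensity in the patch around position t at
    frame t, minus the mean in the same patch at frame t-1 (i.e. before the
    microcluster entered the region).
    """
    frames = np.asarray(frames, dtype=float)
    deltas = []
    for t in range(1, len(track)):
        r, c = (int(round(v)) for v in track[t])
        sl = (slice(max(r - half_width, 0), r + half_width + 1),
              slice(max(c - half_width, 0), c + half_width + 1))
        deltas.append(frames[t][sl].mean() - frames[t - 1][sl].mean())
    return np.array(deltas)
```

A positive value indicates the actin reporter brightened in the region the microcluster moved into; these values can then be compared with the instantaneous radial flow of the same step.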
The cluster region intensity changes, therefore, serve as a proxy for how much the actin filament density changes as microclusters enter regions. These values are plotted against the instantaneous microcluster flows associated with the movements into each patch to examine the correlation between changes in actin density and the direction of radial microcluster flow. *Statistical analyses* Statistical analyses are performed in Prism (GraphPad Software). The Mann-Whitney U test is used for nonparametric comparisons. For data that passes the D’Agostino & Pearson omnibus normality test, Student’s t test is used. For comparing multiple groups, 1-way ANOVA (α = 0.05) with Dunnett’s post-test is used. ### References 1. Albertorio, F. et al. Fluid and Air-Stable Lipopolymer Membranes for Biosensor Applications. *Langmuir* 21, 7476-7482 (2005). - Diaz, A.J., Albertorio, F., Daniel, S. & Cremer, P.S. Double Cushions Preserve Transmembrane Protein Mobility in Supported Bilayer Systems. *Langmuir* 24, 6820-6826 (2008). - Lillemeier, B.F. et al. TCR and Lat are expressed on separate protein islands on T cell membranes and concatenate during activation. *Nat. Immunol*. 11, 90-96 (2010). - Yokosuka, T. et al. Spatiotemporal Regulation of T Cell Costimulation by TCR-CD28 Microclusters and Protein Kinase C θ Translocation. *Immunity* 29, 589-601 (2008). - Hogquist, K.A. et al. T cell receptor antagonist peptides induce positive selection. *Cell* 76, 17-27 (1994). - Jacobelli, J. et al. Confinement-optimized three-dimensional T cell amoeboid motility is modulated via myosin IIA-regulated adhesions. *Nat. Immunol*. 11, 953-961 (2010). - Jacobelli, J., Bennett, F.C., Pandurangi, P., Tooley, A.J. & Krummel, M.F. Myosin-IIA and ICAM-1 Regulate the Interchange between Two Distinct Modes of T Cell Migration. *J. Immunol*. 182, 2041-2050 (2009). - Friedman, R.S., Jacobelli, J. & Krummel, M.F. Surface-bound chemokines capture and prime T cells for synapse formation. *Nat. Immunol*.
7, 1101-8 (2006). - Bubb, M., Spector, I., Beyer, B.B. & Fosen, K.M. Effects of Jasplakinolide on the Kinetics of Actin Polymerization. An explanation for certain in vivo observations. *J. Biol. Chem*. 275, 5163-5170 (2000). - Kolega, J. Phototoxicity and photoinactivation of blebbistatin in UV and visible light. *Biochem. Biophys. Res. Commun*. 320, 1020-1025 (2004). - Sakamoto, T., Limouze, J., Combs, C.A., Straight, A.F. & Sellers, J.R. Blebbistatin, a myosin II inhibitor, is photoinactivated by blue light. *Biochemistry* 44, 584-8 (2005). - Grynkiewicz, G., Poenie, M. & Tsien, R. A new generation of Ca2+ indicators with greatly improved fluorescence properties. *J. Biol. Chem*. 260, 3440-3450 (1985). - Rogers, S.S., Waigh, T.A., Zhao, X. & Lu, J.R. Precise particle tracking against a complicated background: polynomial fitting with Gaussian weight. *Phys. Biol*. 4, 220-7 (2007). ### Acknowledgements The polynomial fit Gaussian weight function was written and made available by S. Rogers (University of Manchester). Lifeact-GFP was a generous gift of R. Wedlich-Soldner (Max Planck Institute of Biochemistry). The InterX MATLAB function was written and made available on the MathWorks File Exchange by “NS”. His-ICAM constructs were provided by B. Lillemeier (Salk Institute) and M. Davis (Stanford University). We thank M. Werner and K. Austgen for assistance in preparing His-ICAM. Biotinylated pMHC monomers were provided by J. Altman (NIH Tetramer Facility, Emory University). ### Figures **Figure 1: A cushioned bilayer system for activating OT1+ T cells** [Download Figure 1](http://www.nature.com/protocolexchange/system/uploads/2166/original/Figure_1.tif?1338927584) *(a) Schematic of the cushioned bilayer system for activating OT1+ T cells. (b,c) Flow cytometric analysis of lipid bilayer standards formed on 5 μm silica microspheres and loaded with a series of concentrations of biotinylated pMHC and his-ICAM protein.
Top: microsphere bilayer standards and BMDCs (loaded with 100 ng/ml SIINFEKL peptide) stained with YN1/1.7.4 anti-ICAM (b) and 25D1.16 anti–pMHC (c). Bottom: plots of input protein concentration (log scale) versus the median fluorescence intensities (from the graphs at top) for the bilayer standards and reference BMDCs*. **MATLAB Functions: Analysis MATLAB functions and scripts** [Download MATLAB Functions](http://www.nature.com/protocolexchange/system/uploads/2190/original/MATLAB_Functions.zip?1339972028) *The zip file includes scripts and functions that can be used to analyze microcluster tracks*. ### Associated Publications **Integration of the movement of signaling microclusters with cellular motility in immunological synapses**, Peter Beemiller, Jordan Jacobelli, and Matthew F Krummel. *Nature Immunology* 13 (8) 787 - 795 [doi:10.1038/ni.2364](http://dx.doi.org/10.1038/ni.2364) ### Author information **Peter Beemiller & Matthew Krummel**, Krummel Lab, UCSF **Jordan Jacobelli**, Unaffiliated Correspondence to: Peter Beemiller (peter.beemiller@ucsf.edu) *Source: [Protocol Exchange](http://www.nature.com/protocolexchange/protocols/2403) (2012) doi:10.1038/protex.2012.028. Originally published online 4 October 2012*.
10

Angela, Mitt. "Education, Human Capital and Economic Growth in Nigeria." August 13, 2020. https://doi.org/10.5281/zenodo.3982749.

Full text
Abstract:
<strong>Gyeongsang University Turnitin Trash Files</strong> <strong>HUMAN CAPITAL NEXUS AND GROWTH OF NIGERIA ECONOMY</strong> <strong>CHAPTER ONE</strong> <strong>INTRODUCTION</strong> <strong>Background to the Study</strong> Government expenditure, also known as public spending, refers to yearly expenditure by the public sector (government) to achieve macroeconomic aims, notably a high literacy rate, skilled manpower, a high standard of living, poverty alleviation, national productivity growth, and macroeconomic stability. It is also expenditure by public authorities at various tiers of government to collectively cater for the social needs of the people. Generally, it has been revealed that public expenditure plays a key role in realizing economic growth, because providing good education to individuals is one of the principal avenues of improving human resource quality in any economy. From this perspective, advancing school enrolment may subsequently lead to economic growth. Education therefore remains the most effective way to subdue poverty, illiteracy and underfeeding, and to accelerate economic growth in the long term. Much attention has been channeled toward clarifying the relationship between education and economic growth, which has led to a series of studies by economists over the past 30 years. There is substantial literature backing the fact that a correlation exists between the two (Sylvie, 2018). In line with the views of Hadir and Lahrech (2015), the fact that humans are the most worthy assets remains undisputable in both developed and developing countries. Therefore, efficiency in human resource management is pertinent if development must be realized. In this sense, the major gateway to development is adequate investment in human capital, which may be described as an individual's potential economic value in terms of skills, knowledge, and other intangible assets.
In order to realize the well-known macroeconomic objective of economic growth, Nigeria, being a developing country, embarked on some programs in the educational sector with the aim of boosting human capital development. However, these programs have only served as conduits for enriching the corrupt political elite. Given the high prospects of achieving economic growth in Nigeria and the place of human capital development in its actualization, education remains a top priority for the Nigerian government as well as concerned researchers. Thus, this study is one among other concerned studies that will attempt to examine the economic growth and human capital nexus in Nigeria through education variables. In particular, using education as a measure of human capital, it will attempt to explore the impact of education variables on the growth of Nigeria's economy. According to Wamboye (2015), education provides vital knowledge, skills, techniques and information for individuals to function in family and society. Education can groom a set of educated leaders to take on jobs in government services, public and private firms, and domestic and foreign firms. The growth of education can provide all kinds of training that would foster literacy and basic skills. Though alternative investments in the economy could generate more growth, this must not detract from the necessary contributions, economic as well as non-economic, that education can make and has made to expediting macroeconomic growth (Clark, 2015). Todaro and Smith, cited in Clark (2015), likewise call attention to the fact that the expansion of education leads to an increasingly productive labor force, provides it with expanded knowledge and abilities, and boosts employment and income-earning avenues for educators, schools, and employees.
Economic growth, proxied by Gross Domestic Product (GDP), gives numerous advantages, which include raising the general living standard of the masses as estimated by per capita income, and making the distribution of income easier to accomplish, thus shortening the time span needed to deliver the fundamental needs of man to a considerable majority of the masses. The main source of per capita output growth in any nation, regardless of whether it is advanced or developing, is really an increment in &#39;human productivity&#39;. Per capita output growth is nonetheless a significant aspect of economic prosperity (Abramowitz, 1981). For the most part, it has been shown that people are the most important source of productivity growth and economic prosperity. Technology and technological hardware are the results of human invention and innovativeness. The suggestion of UNESCO that 26% of the yearly planned expenditure (budget) in developing nations should be dedicated to education has remained unattained, particularly in Nigeria. Planned expenditure on education in Nigeria ranges from only 5%-7% of total planned expenditure. The impact of the above situation on the economic prosperity of the nation, as it concerns human capital development, capacity building, infrastructural advancement, etc., is troubling. On this note, the necessity of a well-thought-out plan for rectifying this unwanted situation cannot be overstressed. <strong>1.2 Statement of the Problem </strong> Sikiru (2011), as cited in Ajibola (2016), rightly pointed out that the role of education in any economy is no longer business as usual because of the knowledge-based globalized economy, where productivity greatly depends on the quantity and quality of human resources, which itself largely depends on investment in education.
Governments continue to increase spending on education with a view to enhancing the standard of education, building human capacity and attaining economic growth. Ironically, this effort by government is still a far cry from UNESCO&rsquo;s recommendation of 26% of the total annual budget for education, and so has not yielded the expected results. Thus, researchers have sought to understand the relationship between government expenditure on education and economic growth and how they influence each other. Research on the above subject matter has given rise to divergent schools of thought. Over time, Nigeria has indicated willingness to develop education in order to curtail illiteracy and quicken national development. However, regardless of the irreproachable evidence that education is key to the improvement of the economy, there exists a wide gap in accessibility, quality and fairness (equity) in education (Ayo, 2014). Empirically verifiable facts in recent years have indicated that the Nigerian education system has continuously turned out graduates who over time have failed to adapt to evolving techniques and methods of production, due to inadequate infrastructure, underfunding, poor learning aids, an outmoded curriculum, and a dearth of research and development. This has resulted in a drastic reduction in employment and the advent of capacity underutilization. This study assesses the growth of the Nigerian economy in relation to government expenditure on education and school enrollment from 1981 to 2018. Frequent adjustments and changes in the education system in Nigeria point to the fact that all is not well with the country&rsquo;s education system. Government has experimented with the 6-3-3-4 and 9-3-4 systems of education, among others. Enrollment in schools forms the main part of investment in human capital in most of the world&rsquo;s societies (Schultz, 2002).
There are several explanations concerning why improvement in scholastic quality is not forthcoming in Nigeria as regards the above subject matter. Researchers disagree on whether changes in education attainment levels alter the economic growth rate in the long term. &ldquo;In Nigeria, average public education expenditure to total government expenditure between 1981 and 2018 is 5.68 per cent. It ranged between 0.51 and 10.8 per cent during the period under review&rdquo; (CBN Statistical Bulletin, 2019). The major problem, therefore, is that despite an increase in the numeric value of budgetary allocations to education in Nigeria over the years, they still fall short of UNESCO&rsquo;s 26% recommendation: for instance, the share of the total annual budget allocated to education was 10.6% in 2014, 9.5% in 2015, 6.1% in 2016, 5.41% in 2017, 7.0% in 2018 and 7.2% in 2019. The statistics presented above indicate that investment in education has not produced the desired level of human capital and economic growth in Nigeria. These uncertainties as they relate to government expenditure on education, school enrollment and the growth of the Nigerian economy gave birth to this research work. Furthermore, most studies relating to the subject matter conducted analyses on time series data without testing the data sets for structural breaks, thereby giving rise to spurious results and, therefore, unreliable recommendations. For instance, unit root tests with structural breaks were not employed in the majority of these studies. <strong>1.3 Research Questions </strong> The issues raised above have provoked a series of questions which this study attempts to answer. These questions include: i. To what extent does government expenditure on education affect the growth of the Nigerian economy? ii. To what extent does primary school enrollment affect the growth of the Nigerian economy? iii. To what extent does secondary school enrollment affect the growth of the Nigerian economy? iv.
To what extent does tertiary school enrollment affect the growth of the Nigerian economy? <strong>1.4 Objectives of the Study </strong> The main objective of the study is to assess the effect of government expenditure on education on the growth of the Nigerian economy. Specific objectives of the study are to: i. Assess the effect of government expenditure on the growth of the Nigerian economy. ii. Assess the effect of primary school enrollment on the growth of the Nigerian economy. iii. Assess the effect of secondary school enrollment on the growth of the Nigerian economy. iv. Assess the effect of tertiary school enrollment on the growth of the Nigerian economy. <strong>1.5 Hypotheses of the Study </strong> The following hypotheses were tested in this study. i. Government expenditure on education has no significant effect on the growth of the Nigerian economy. ii. Primary school enrollment has no significant effect on the growth of the Nigerian economy. iii. Secondary school enrollment has no significant effect on the growth of the Nigerian economy. iv. Tertiary school enrollment has no significant effect on the growth of the Nigerian economy. <strong>1.6 Scope of the Study </strong> The study covers the time series analysis of government expenditure on education, school enrolment (primary, secondary and tertiary) and the growth of the Nigerian economy from 1981 to 2018, based on available data. Justification for this study rests on the premise that the time series data used for the study are current data on government expenditure on education, education enrolment and economic growth in Nigeria. This study used annual data for the period 1981-2018, collected from the CBN Statistical Bulletin (2019) and the World Bank databank. Variables employed for the study include real GDP per capita, government expenditure on education, and primary, secondary and tertiary school enrolment. <strong>1.7 Significance of the Study </strong> Models of economic growth provide useful predictions that inform decisions made by policy makers.
Adopting policy options based on inaccurate research studies could undermine government intervention, particularly in the education sector. A good understanding of the interaction among investment in education, its outcomes, school enrolment and economic growth supports appropriate policy measures and guarantees human capital development. Thus, a representative model that takes cognisance of the interplay among public education expenditure, school enrolment and the growth of the economy will lead to adequate disbursement and utilization of government funds. The outcome of this research will serve as a tool for policy makers in the Ministries of Finance and Education and the National Planning Commission, including regulatory agencies not mentioned here. It will also serve as reference material for subsequent research work in this field. <strong>1.8 Limitation of the Study </strong> This research examines government expenditure on education and school (primary, secondary and tertiary) enrolment as they relate to the growth of the Nigerian economy. Time series data covering the period 1981 to 2018 are used for this study. That the study was undertaken in 2020 but could not access 2019 data on the variables used stands as one of the limitations, since lag periods are essential in policy implementation. Data availability, the genuineness and accuracy of the same, and time and financial constraints also constitute limitations to this research work. The effect of corruption on government expenditure and the outbreak of Coronavirus, resulting in the closure of tertiary institutions in Nigeria, also constitute limitations to this study. <strong>1.9 Organization of the Study </strong> This research work comprises five (5) chapters. Chapter one consists of the background to the study, statement of the problem, research questions, research hypotheses and scope of the study. Chapter two consists of the conceptual framework, theoretical review, review of related literature and theoretical framework.
Chapter three explains the methodology this research adopted. Chapter four presents the results and discusses the findings. Chapter five consists of the summary of findings, conclusion, policy recommendations, contribution to knowledge and suggestions for further studies. <strong>CHAPTER TWO</strong> <strong>LITERATURE REVIEW AND THEORETICAL FRAMEWORK</strong> <strong>2.1 Conceptual Review</strong> <strong>2.1.1 Government</strong> Government is the sector of the economy focused on providing various public services. Its structure differs by nation, yet in many nations government provides such services as infrastructure, military, police, public transport and government-provided education, alongside medical services, and employs those working for the public sector itself, such as elected authorities. The government may offer types of assistance from which a non-taxpayer cannot be barred (for example, street lighting): goods which aid all of society instead of benefiting only one person. Finances for government goods and services are generally obtained through various techniques, including taxes, charges, and monetary transfers from different tiers of government (for example, from the federal to a state government). Various governments around the globe may utilize their own strategies for financing public goods and services. <strong>2.1.2 Government Expenditure</strong> Government expenditure refers to government spending, both capital and recurrent. For the purpose of this study, we limit our scope to government educational expenditure in Nigeria.
The theory of government expenditure is the theory of the costs of availing goods and services through planned spending (budget). There are two ways to deal with the subject of the growth of government, namely, the expansion in the total size of government spending and the expansion of government in terms of economic magnitudes. Government expenditure is spending made by the public sector (government) of a nation on aggregate needs and wants, for example, pensions and the arrangement of infrastructure, among others. Until the nineteenth century, government spending was constrained, as free-enterprise theorists believed that financial resources left in the private sector could lead to higher returns. In the twentieth century, John Maynard Keynes advocated the role of government spending in influencing levels of wages and income distribution in the economy. From that point forward, government spending has demonstrated an expanding pattern. The public expenditure trend of the government of a nation is essentially the manner in which assets (resources) are distributed to the various segments of the economy where spending is required. It is exemplified in the government&rsquo;s ways of spending money. In analyzing the trend of government spending, hence, it is critical to realize that under a federal system of administration, the government&rsquo;s job of dealing with the economy is the joint duty of the different tiers of government (Eze and Ikenna, 2014). <strong>2.1.3 Human Capital </strong> By and large, general human capital is characterized as all skills that are indistinguishably helpful to numerous organizations, including the training organization. Industry-specific skills, conversely, foster efficiency (productivity) only in the industry in which the skills were obtained. In a competitive market setting, laborers consistently earn a wage that equals their marginal productivity and, in this manner, in the case of general human capital, laborers earn a similar wage wherever they work.
<strong>2.1.4 Economic Growth</strong> As per Haller (2012), economic growth or economic expansion means the process of increasing the size of a country&rsquo;s economy and its macro-economic indicators, particularly the per capita GDP, in an incremental yet not mandatorily linear course, with beneficial outcomes for the socio-economic sector. The IMF (2012) perceives economic expansion as the increase in the market worth of commodities created in a country over a period of time after discounting for inflation. The rate of increment in real Gross Domestic Product is often used as an estimate of economic expansion. In the perspective of Kimberly (2012), economic expansion is an expansion in the creation of commodities. Any expansion in the worth of a nation&rsquo;s created commodities is likewise characterized as economic expansion. Economic expansion means an expansion in real GNP per unit of labor input. This relates to the variation of labor efficiency with time. Economic expansion is routinely estimated by the pace of increment in GDP. It is often estimated in real terms (deducting the impact of inflation on the cost of all commodities created). Growth improves the living standard of the individuals in that specific nation. As per Jhingan (2004), one of the greatest aims of monetary policy in recent times has been rapid macroeconomic expansion. He thus characterized economic prosperity (growth) as the event whereby the real per capita earnings (income) of a nation increase over a significant stretch of time. Economic expansion is estimated by the expansion in the quantity of commodities created in a nation. An expanding economy creates more commodities in each subsequent timespan. In this manner, growth happens when an economy&#39;s capacity to produce increases and this capacity, in turn, is utilized to create a greater quantity of commodities.
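The measurement described above (the rate of increment in real GDP) can be made concrete with a small computation; the GDP figures below are made-up illustrative numbers, not actual Nigerian data:

```python
# Real GDP growth rate: the percentage change in real GDP between
# consecutive years. The figures are illustrative only.
real_gdp = [100.0, 104.0, 106.1, 103.9]   # real GDP in consecutive years

growth = [100.0 * (real_gdp[t] - real_gdp[t - 1]) / real_gdp[t - 1]
          for t in range(1, len(real_gdp))]
for t, g in enumerate(growth, start=1):
    print(f"year {t}: {g:+.2f}%")
# year 1: +4.00%
# year 2: +2.02%
# year 3: -2.07%
```

Dividing real GDP by population before taking the same percentage change gives per capita growth, the variant Jhingan's definition refers to.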
In a more extensive perspective, economic expansion means increasing the living standard of individuals and reducing disparities in earnings. <strong>2.1.5 Gross Domestic Product (GDP)</strong> Investopedia designates Gross Domestic Product (GDP) as the financial worth of marketable commodities created in a nation during a given duration of time. It is normally computed on a yearly or a quarterly premise. It comprises household and government consumption, government pay-outs, investments and net exports within a sovereign territory. Put plainly, GDP is a broad estimation of a country&#39;s aggregate economic activity. <strong>2.1.6 Education</strong> There is no singular meaning of education, and this is on the grounds that it indicates various things to various individuals, cultures and societies (Todaro and Stephen, 1982). Ukeje (2002) considers education to be a process, a product and a discipline. When viewed as a process, education is a group of activities which involves passing knowledge across age-groups (generations). When viewed as a product, education is estimated by the characteristics and attributes displayed by the educated individual. Here, the educated individual is customarily considered to be an informed and refined individual. As a discipline, education is perceived in terms of the merits of well-structured knowledge with which learners are acquainted. Education is a discipline concerned with techniques of giving guidance and learning in institutions of learning, in lieu of informal socialization avenues like rural development undertakings and education via parent-child interactions. It comprises both inherent (intrinsic) and instrumental worth. It is desirable for the person as well as for the general public. Education as a private commodity directly aids those who receive it, which thusly influences the person&#39;s future income stream.
At the macroeconomic level, a workforce that is superior in terms of education is thought to expand the supply of human capital in the economy and increment its efficiency (productivity). Considering the externalities pervasive in education, it is broadly acknowledged that the state has a key task to carry out in guaranteeing the fair distribution of educational opportunities to the whole populace. This is especially critical in developing nations, for example Nigeria, that experience the ill effects of elevated poverty levels, inequality and market imperfections. Enrolment may be viewed as the process of commencing participation in a school; that is, the number of learners (students) adequately registered as well as participating in classes (Oxford Dictionaries). <strong>2.1.7 Primary Education</strong> Pupils usually commence learning at the elementary level when they are as old as 5 years or more. Pupils go through 18 terms, equivalent to 6 years, at the elementary level and may be awarded a first school leaving certificate upon successful completion of learning. Subjects treated at the elementary stage comprise arithmetic, foreign and indigenous languages, culture, home economics, religious studies, and agricultural science. Privately owned institutions of learning may opt to treat computer science and fine arts. It is mandatory to participate in a Common Entrance Examination in order to meet the requirements for induction into secondary institutions of learning. <strong>2.1.8 Secondary Education in Nigeria</strong> Decades after the advancement of elementary education, government gave attention to secondary education because of the requirement for pupils to advance their education in secondary schools. Secondary education is defined as the completion of the fundamental education that started at the elementary level, and seeks to establish the frameworks for long-term learning and human development by providing subject- and skill-centred guidance.
It is equally a link between elementary learning and tertiary learning. It is given in two phases, junior and senior levels, of three years each, making a six-year duration. It was only in 1909 that the colonial administration began to supplement the endeavors of the Christian Missions in giving secondary education. This was when King&#39;s College was established in Lagos as the colonial government&#39;s secondary institution of learning. As per Adesina and Fafunwa, numerous laws were enacted to improve the condition of secondary education in Nigeria. For the duration of the time the nation was under colonial governments, there were scarcely any secondary schools to give secondary education to those that were then ready to gain it. <strong>2.1.9 Tertiary Education</strong> Institutions of tertiary learning comprise universities, colleges of education, polytechnics and monotechnics. Government has dominant control of university education and regulates universities through the National Universities Commission (NUC). At the university level, first year selection criteria include: at least 5 credits in not more than two sittings in WAEC/NECO; and a score above the 180 benchmark in the Joint Admission and Matriculation Board (JAMB) Entrance Examination. Prospective entrants who hold satisfactory national certificates of education (NCE) or national diplomas (ND) with 5 or more ordinary level credits may gain direct entry into universities at the undergraduate level. <strong>2.2 Theoretical Review</strong> <strong>2.2.1 Wagner&rsquo;s Law of Expanding State Activity </strong> Public expenditure has one of its oldest theories rooted in Adolph Wagner&rsquo;s (1883) work.
Wagner was a German economist who came up with a fascinating hypothesis of development in 1883, which held that as a country builds up its public sector, government spending will consequently become more significant. Wagner developed a &ldquo;law of increasing state activity&rdquo; after empirical investigation of Western Europe toward the concluding part of the nineteenth century. Wagner&#39;s Law, as treated in Likita (1999), contended that government administration growth is a product of advancement in industrialization and economic development. Wagner believed that during industrialization, the expansion of real earnings per capita will be accompanied by increments in the portion of government spending in total spending. He stated that the coming of industrial communities can bring about greater political impetus for social advancement and expanded earnings. Wagner (1893) stated three central reasons for the expansion in state spending. To start with, public sector activities will supplant private sector activities during industrialization. State duties like administrative and protective duties will increase. Furthermore, governments will be expected to provide social services and assistance like education, public health for the elderly, subsidized food, natural hazard and disaster aid, environmental protection programs and other social services. Thirdly, industrial expansion will lead to novel technology and erode monopoly. Governments will need to balance these impacts by offering public goods through planned spending. Adolf Wagner, in Finanzwissenschaft (1883) and Grundlegung der politischen Wissenschaft (1893), identified state spending as an &ldquo;internal&rdquo; factor, controlled by the development of aggregate earnings. Thus, aggregate earnings give rise to state spending.
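Wagner's proposition is commonly operationalized as an income elasticity of government spending greater than one, estimated from a log-log regression of spending on aggregate income. A minimal sketch with synthetic data (the series and the "true" elasticity of 1.3 are hypothetical, chosen only to illustrate the estimation):

```python
# Wagner's law sketch: regress ln(G) on ln(Y). A slope (elasticity)
# greater than 1 is consistent with Wagner's hypothesis that the
# government share of spending rises with income. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
ln_y = np.linspace(8.0, 10.0, 38)                    # log income, 38 "years"
ln_g = -4.0 + 1.3 * ln_y + rng.normal(0, 0.05, 38)   # true elasticity 1.3

X = np.column_stack([np.ones_like(ln_y), ln_y])      # intercept + ln(Y)
beta, *_ = np.linalg.lstsq(X, ln_g, rcond=None)      # OLS estimates
print(f"estimated income elasticity of government spending: {beta[1]:.2f}")
```

In applied work on a real series, the same regression would follow the unit root and cointegration checks discussed elsewhere in the study, since both logged series are typically non-stationary.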
Wagner&#39;s law may be viewed as a long-term phenomenon which is best observed with lengthy time series for better economic interpretation and statistical inference. This is because these patterns were expected to manifest after 5 or 10 decades of modern industrial society. The hypothesis of public spending is the hypothesis of the costs of availing commodities through planned government spending, as well as the theory of policies and laws enacted to bring about private spending. The two ways to deal with the topic of the growth of the government sector are, namely, the expansion in the volume of public spending and the expansion of the public sector. Okafor and Eiya (2011) investigated the factors responsible for the increment of government spending utilizing the BLUE-OLS estimator. They discovered that population, government borrowing and government income significantly affected government spending at the 5% level, while inflation did not. Further, Edame (2014) examined the predictive factors of state infrastructure spending in Nigeria, utilizing error correction modeling. In this study, it was found that the growth pace of urbanization, public income, density of population, system of government, and foreign reserves collectively or separately impact Nigeria&rsquo;s state infrastructure spending. <strong>2.2.3 The Classical Theory of Economic Growth</strong> This theory signifies the underlying structure of economic reasoning, and Adam Smith&#39;s &quot;The Wealth of Nations&quot; (1776) typically paves the way for classical economics. Prominent and remarkable advocates of the classical school are: Adam Smith (1723-1790), David Ricardo (1772-1823), Thomas Malthus (1766-1834), Karl Marx (1818-1883), John Stuart Mill (1806-1873), Jean-Baptiste Say (1767-1832) and others.
Basically, Smith&#39;s theory says that the wealth of nations was built not upon gold, but upon commerce: when two economic agents trade valuable commodities in order to reap the benefits of trade, wealth grows. The classicists observed that markets are self-regulating when liberated from compulsion. The classicists termed this, figuratively, the &quot;invisible hand&quot;, which establishes equilibrium when consumers choose among various suppliers, and failure is allowed among firms that fail to compete successfully. The classicists often warned against the risks of &ldquo;trusts&rdquo; and placed emphasis on the free market economy (Smith, 1776). Adam Smith connected the expansion in the wealth of individuals to the expansion of the yield of production factors, which manifests in the improvement of the productivity of labor and an expansion in the quantity of working capital. Much scrutiny was given to population expansion, to the expansion in the portion of laborers in material production, to investment and to geographical discoveries, which contributed to far-reaching prosperity. The perspective of Thomas Malthus on economic expansion, portraying the expansion of population and the expansion in production, appeared pessimistic. As per Malthus, when the population is expanding geometrically while the means of subsistence expand only arithmetically, the aftermath will be inadequate earth resources (land), and consequently a severe battle for scarce resources, with the prevalence of wars, plagues, hunger, mass illness, etc. (Ojewumi and Oladimeji, 2016). As a solution to this issue, Malthus proposed to limit the growth of the population by a &quot;call to prudence&quot;, particularly among the impoverished, with children to be born only where they could be provided with decent means of subsistence. One of the most compelling classicists was David Ricardo (1772-1823).
Apparently, the hypothesis of comparative advantage, which recommends that a country should engage exclusively in internationally competitive industries and trade with other nations to acquire commodities lacking domestically, is his most notable contribution. He contended for the existence of a natural market wage and expected that new technologies would result in a fall in the demand for labor. John Stuart Mill (1806-1873) to a great extent summarized the earlier ideologies of the classicists. Specifically, he completed the classicists&rsquo; hypothesis of economic dynamics that considers long-term economic patterns. At the core of this idea is the unceasing accumulation of capital. As indicated by the hypothesis, the expansion in capital prompts an increment in the demand for labor; with population unchanged, this gives rise to an increment in real earnings, which in turn gives rise to population expansion in the long run. When the accumulation of capital is quicker than the expansion in the workforce, both of these processes can, in principle, continue forever. An increment in the quantity of laborers means having more &quot;mouths&quot;, hence an expansion in the demand for consumption and particularly for food. Food is created in agribusiness, which, as we know, is characterized by diminishing returns to scale. Therefore, issues of diminishing marginal productivity of capital emerge, along with a fall in incentives to invest. <strong>2.2.4 The Keynesian Approach to Public Expenditure </strong> John M. Keynes (1936), a British economist and the pioneer of macroeconomics, contended that public spending is a crucial determinant of economic prosperity. Keynes&rsquo;s hypothesis clearly stated that the fiscal policy instrument (for example, government expenditure) is a significant apparatus for obtaining stability and a better economic expansion rate in the long term.
To obtain stability in the economy, this hypothesis endorses government action in the economy through macroeconomic policy, especially fiscal policy. From the Keynesian view, government spending contributes incrementally to economic expansion. Keynes contended that it is necessary for government to intervene in the economy, since government could reverse financial downturns by raising finances through borrowing from the private sector and afterwards restoring the funds to the private sector through several spending programs. Likewise, government capital and recurrent spending in the form of provision of classrooms and research centers, acquisition of teaching and learning aids including PCs, and payment of salaries will have multiplier effects on the economy. Spending on education will boost productivity as well as advancement by improving the quality of labour. It will likewise help in developing a stream of educated administrators in both the private and public sectors of the economy. Keynes classified public spending as an exogenous variable that can create economic prosperity rather than an endogenous phenomenon. In summary, Keynes acknowledged the functioning of the government to be significant, as it can prevent economic downturn by expanding aggregate demand and, in this manner, switching on the economy again through the multiplier effect. It is an apparatus that proffers stability in the short term, yet this should be done carefully, as excessive government spending leads to inflation while a lack of spending aggravates unemployment. <strong>2.2.5 Human Capital Theory</strong> Human capital theory, initially developed by Becker (1962), contends that workers have a set of abilities which they can improve or acquire by learning and instruction (education). Be that as it may, human capital theory for the most part assumes that experiences are converted into knowledge and skills. It helps us comprehend the training activities of organizations.
It (re-)introduced the view that education and training add up to investment in future efficiency (productivity) and not only consumption of resources. From this viewpoint, both firms and labourers rely upon investment in human capital to foster competitiveness, profitability, and earnings. In spite of the fact that these advantages are self-evident, these investments come with some costs. From the firm&#39;s perspective, investments in human capital differ from those in physical capital, because the firm doesn&#39;t gain a property right over its investment in skills, so it and its employees need to agree on the sharing of the costs and benefits derived from these investments. While investments in physical capital are solely the organization&#39;s own choice, investments in the abilities (skills) of its workforce involve interaction with the workers to be trained. In his basic formulation, Becker, assuming that commodity and labour markets are perfectly competitive, introduced the distinction between firm-specific and general human capital to answer the question: who bears the expenses of training? <strong>2.2.6 Neoclassical Growth Theories </strong> The neoclassical growth hypotheses arose in the 1950s and 1960s, when regard for the issues of dynamic equilibrium declined and the issue of actualizing growth potential through the adoption of novel technology, boosting productivity and improving the organization of production gained popularity. The principal advocates of this school are Alfred Marshall (1842-1924), Leon Walras (1834-1910), William Stanley Jevons (1835-1882), Irving Fisher (1867-1947) and others. The American economist Robert Solow (b. 1924), along with other economists, opposed the state&#39;s participation and rather supported the notion of permitting firms to grow competitively by utilizing the majority of the assets accessible to them.
They leaned on the production theory and marginal productivity theory from the classical school, according to which the earnings obtained by production factors depend on their marginal products. Neoclassical scholars disagreed with neo-Keynesian views on growth on three grounds (UN, 2011): first, because they are centered on capital accumulation, overlooking land, labour, technology and so on; second, because they are rooted in the assumed unchanging nature of the capital share in earnings (income); and third, because, while the neoclassicists recognized the self-restoring equilibrium of the market mechanism, the neo-Keynesians overlooked it. On this premise, they identified inflationary government spending as a source of instability in the economy. <strong>2.2.7 The Endogenous Growth Theory</strong> This was created as a response to omissions and inadequacies in the Solow-Swan model. That model ties the long-term pace of economic expansion to the pace of population expansion and the pace of technical advancement, both autonomous with regard to the savings rate. Since the long-term economic expansion rate depended on exogenous factors, Romer (1994) observed that the neoclassical hypothesis had only limited policy implications. As per Romer, in models with exogenous technical change and exogenous population expansion, it never truly made a difference what the public administration did. The new growth theory doesn&#39;t rebuff the neoclassical growth theory; rather, it broadens the neoclassical growth hypothesis by incorporating endogenous technical advancement in growth models. The endogenous growth models have been improved by Kenneth J. Arrow, Paul M. Romer, and Robert E. Lucas. The endogenous growth model highlights technical advancement arising from the pace of investment, the quantity of capital, and the supply of human capital. Romer saw natural assets as a lower priority than ideas.
He refers to the case of Japan, which has limited natural assets but welcomed novel ideas and technology from the West. These included improved designs for the production of producer durable goods for final production. Accordingly, ideas are key to economic prosperity. With respect to endogenous growth theory, Chude and Chude (2013) submitted that the major improvement of the endogenous growth hypothesis over the past models lies in the fact that it treats the determinants of technology. That is, it openly attempts to model technology instead of assuming it to be exogenous. Momentously, it offers a formal account of technological improvement built on a novel idea of human capital: the knowledge and abilities (skills) that enable employees to be increasingly productive. More often than not, economic expansion is a product of progress in technology, arising from effective utilization of productive resources through the process of learning. This is because human capital development has a high, or even increasing, rate of return. Therefore, the rate of growth depends heavily on what (the type of capital) a country invests in. Thus, to achieve economic expansion, public expenditure on human capital development, especially education spending, must be increased. At the same time, the theory predicts unexpected additional benefits from the advancement of a substantial value-added knowledge economy that can develop and preserve a competitive advantage in expanding industries. <strong>2.3 Empirical Review</strong> Bearing in mind the sensitive nature of the field being studied, many investigations have been conducted with the aim of clarifying the divergent ideological schools. For example, Amadi and Alolote (2020) explored the government infrastructural spending and Nigeria&rsquo;s economic advancement nexus.
The investigation uncovered that public spending on transport, communication, education and medical infrastructure significantly affects economic expansion, while spending on agriculture and natural resources infrastructure recorded a major adverse impact on economic expansion. Although the investigation is recent, the time series variables were not subjected to unit root tests with breaks, and may thus yield misleading outcomes. Shafuda and Utpal (2020) explored the government spending on human capital and Namibia&rsquo;s economic prosperity (growth) nexus from 1980 to 2015. The examination utilized human development indicators like healthcare outcomes, educational accomplishments and increments in national earnings in Namibia. The investigation uncovered huge effects of government spending on medical care and education on GDP expansion over the long-term. That a study conducted in 2020 utilized data covering only 1980 to 2015 constitutes a shortcoming of that work. Ihugba, Ukwunna, and Obiukwu (2019) explored the government education spending and Nigeria&rsquo;s elementary school enrolment nexus by applying the bounds testing (ARDL) method of cointegration over the period 1970 to 2017. The model utilized for the investigation attempted to capture the interaction between the two variables and their relationship with control variables: per capita earnings (income), remittances, investment and population expansion. The bounds tests indicated that the variables studied are bound together over the long-term when elementary school enrolment is the endogenous variable. The investigation observed that an inconsequential relationship exists between government education spending and elementary school enrolment, while a positive relationship exists between remittances and elementary school enrolment. Sylvie (2018) explored the education and India&rsquo;s economic expansion nexus.
The investigation inspected the connection between education and economic prosperity in India from 1975 to 2016 by concentrating on the elementary, secondary and tertiary levels of education. It used econometric estimations with the Granger Causality Method and the Cointegration Method. The study indicated that there is convincing proof demonstrating a positive association between education levels and economic expansion in India, which may impact government activities and shape the future of India. Ayeni and Osagie (2018) explored the education spending and Nigeria&rsquo;s economic expansion nexus from 1987 to 2016. The investigation uncovered that education spending was inconsistent with education sectoral yield (output); while recurrent education spending had a meaningful relationship with real gross national output (or GNP), capital spending on education was weak. Ogunleye, Owolabi, Sanyaolu and Lawal (2017) utilized the BLUE-OLS estimator to study the effect of advancement in human capital on Nigeria&rsquo;s economic expansion from 1981 to 2015. The empirical outcomes indicated that human capital development has strong effects on economic expansion (growth). Likewise, the human capital development variables, secondary school enrolment, tertiary enrolment, aggregated government spending on health and aggregated government spending on education, displayed positive and strong effects on the economic expansion of Nigeria. Glylych, Modupe and Semiha (2016) explored the education and Nigeria&rsquo;s economic expansion nexus utilizing the BLUE-OLS estimator to unveil the interaction between education as human capital and real Gross Domestic Product. The investigation found a strong connection between GDP and the different indicators (capital spending on education, recurrent spending on education, elementary school enrolment and secondary school enrolment) utilized in the investigation, except for elementary school enrolment (PRYE).
Lingaraj, Pradeep and Kalandi (2016) explored the education expenditure and economic expansion nexus in 14 major Asian nations by utilizing balanced panel data from 1973 to 2012. The cointegration result indicated the presence of long-run relationships between education spending and economic expansion in all the nations. The findings additionally uncovered a positive and significant effect of education spending on the economic advancement of all the 14 Asian nations. Further, the panel vector error correction showed unidirectional Granger causality running from economic expansion to education spending both in the short and long-run; however, education spending only Granger-causes long-run economic expansion in all the nations. The findings likewise demonstrated a positive effect of education spending on economic expansion. The study contended that the education sector is one of the significant elements of economic expansion in each of the 14 Asian nations. A significant portion of government spending ought to be directed to education, by upgrading elementary, secondary and technical education in the respective countries, to make available the skilled labour needed for long-term economic advancement. Ojewumi and Oladimeji (2016) explored the government financing and Nigeria&rsquo;s education nexus. In the research work, public spending on education was arranged into two classes (recurrent and capital spending). The data covered the period 1981 to 2013 and were secondary in nature, obtained for the most part from the publications of the World Bank, the Central Bank of Nigeria and the National Bureau of Statistics. The BLUE-OLS estimator was utilized to study the data. The main results indicated that the effects of both capital and recurrent spending on education expansion were negative during the examination time frame.
The study suggested that the elevated level of corruption common in the educational sector ought to be checked to guarantee that finances ear-marked for education, particularly capital spending in the sector, are prudently appropriated. Government at various levels ought to likewise increase both capital and recurrent spending to support the educational sector up to the United Nations standard. Obi, Ekesiobi, Dimnwobi and Mgbemena (2016) explored the government education spending and Nigeria&rsquo;s education outcome nexus from 1970 to 2013. The investigation utilized the BLUE-OLS estimator and demonstrated that government spending on education has a positive and notable impact on education. Public health spending and urban population expansion were likewise found to affect education outcomes positively, though insignificantly. Omodero and Azubike (2016) explored the government spending on education and Nigeria&rsquo;s economic advancement nexus from 2000 to 2015. Multiple regression analysis and the student t-test were applied for the investigation. The outcome of the investigation showed that education spending is significant and affects the economy. Additionally, education enrolment demonstrated a significant relationship with GDP but a minor effect on the economy. Muhammad and Benedict (2015) explored the education spending and Nigeria&rsquo;s economic expansion nexus during the period 1981-2010. Cointegration and Granger causality tests were utilized so as to unveil the causal nexus between education spending and economic expansion. They found that there is cointegration between the real growth rate of GDP, aggregated government spending on education, recurrent expenditure on education and elementary school enrolment.
Adeyemi and Ogunsola (2016) explored the advancement in human capital and Nigeria&rsquo;s economic expansion nexus from 1980-2013 using secondary school enrolment, life expectancy rate, government spending on education, gross capital formation and the economic expansion rate. The ARDL cointegration approach was utilized in the investigation, and it uncovered a positive long-run nexus among secondary school enrolment, life expectancy rate, government spending on education, gross capital formation and the economic expansion rate. Olalekan (2014) explored the human capital and Nigeria&rsquo;s economic expansion nexus utilizing yearly data on education and health from 1980 to 2011. The investigation made use of Generalized Method of Moments (GMM) techniques, and the estimated outcomes gave proof of a positive connection between human capital and economic expansion. Oladeji (2015) explored the human capital (through education and effective services in healthcare) and Nigeria&rsquo;s economic expansion nexus from 1980 to 2012. The investigation utilized the BLUE-OLS estimator and uncovered that there is a significant functional and institutional connection between investment in human capital and economic expansion. The work indicated that a long-term nexus existed between education and the economic expansion rate. Hadir and Lahrech (2015) explored the human capital advancement and Morocco&rsquo;s economic expansion nexus utilizing yearly data from 1973 to 2011. The BLUE-OLS estimator was incorporated utilizing aggregated government spending on education and health and the enrolment data of tertiary, secondary and elementary educational institutions as measures for human capital. The research uncovered a positive nexus between aggregated government spending on education, aggregated government spending on health, elementary education enrolment, secondary education enrolment and tertiary education enrolment.
Obi and Obi (2014) explored the education spending and Nigeria&rsquo;s economic expansion nexus, as a means of accomplishing the ideal socio-economic change required, from 1981 to 2012. The Johansen cointegration method and the BLUE-OLS estimator were utilized to closely study the connection between GDP and recurrent education spending. The results showed that although a positive relationship existed between education spending and economic expansion, a long-term nexus was not obtainable over the period under examination. Jaiyeoba (2015) explored the investment in education/health and Nigeria&rsquo;s economic expansion nexus from 1982 to 2011. He utilized trend analysis, the Johansen cointegration method and the BLUE-OLS estimator. The outcomes demonstrated that there was a long-term connection between government spending on education and health and economic expansion. The factors, health and education spending, secondary and tertiary enrolment rates and gross fixed capital formation, carried the expected positive signs and were notable determinants (apart from government spending on education and the elementary education enrolment rate). Sulaiman, Bala, Tijani, Waziri and Maji (2015) explored the human capital/technology and Nigeria&rsquo;s economic expansion nexus. They utilized yearly time series covering 35 years (1975-2010) and applied the autoregressive distributed lag method of cointegration to look at the connection between human capital, technology, and economic expansion. Two measures of human capital (secondary and university enrolment) were utilized in two different models. Their outcome uncovered that all the factors in the two separate models were cointegrated. Besides, the findings from the two assessed models indicated that human capital, as measured by secondary and tertiary education enrolments, has a significant positive effect on economic expansion.
Borojo and Jiang (2015) explored the education/health (human capital) and Ethiopia&rsquo;s economic expansion nexus from 1980 to 2013. Human capital stock was measured by elementary, secondary and tertiary education enrolment, while human capital investment was proxied by spending on health and education. The Augmented Dickey-Fuller test and Johansen&#39;s cointegration method were utilized to test for unit roots and to ascertain cointegration among the factors, respectively. Their investigation indicated that public spending on health as well as education, and elementary as well as secondary education enrolments, have positive and significant impacts on economic expansion both in the short-term and the long-term. Ekesiobi, Dimnwobi, Ifebi and Ibekilo (2016) explored the public education investment and Nigeria&rsquo;s manufacturing yield nexus. The investigation utilized the Augmented Dickey-Fuller (ADF) unit root test and the BLUE-OLS estimator to examine the connection between public educational spending, elementary school enrolment rate, per capita income, exchange rate, FDI and the manufacturing yield (output) rate. The investigation discovered that public education spending has a positive but inconsequential impact on the manufacturing yield (output) rate. Odo, Nwachukwu, and Agbi (2016) explored the government spending and Nigeria&rsquo;s economic expansion nexus. Their finding demonstrated that social capital had an inconsequential positive effect on economic expansion during the period under consideration. Jiangyi (2016) explored the government educational spending and China&rsquo;s economic expansion nexus, bearing in mind spatial third-party spill-over effects. The findings uncover that public educational spending in China has a significant positive effect on economic expansion, but spending at various educational levels shows varying outcomes.
Public educational spending below higher education is positively related with domestic economic expansion, while the impact of educational spending on higher education is inconsequential. Lawanson (2015) explored the importance of the health and educational elements of human capital to economic expansion, utilizing panel data from sixteen West African nations over the period 1980 to 2013. He utilized the Diff-GMM dynamic panel procedure. The empirical results show that the coefficients of both health and education have positive and significant impacts on GDP per capita. The paper ascertains the importance of human capital to economic expansion in West Africa. He suggested that more assets and policies to persuade and improve access to both education and health by the populace ought to be pursued by policy makers. Ehimare, Ogaga-Oghene, Obarisiagbon and Okorie (2014) explored the connection between Nigerian government expenditure and human capital development. The level of human capital development, which is a measure of the degree of wellbeing (health) and educational achievement of a country, influences the level of economic activities in that country. The Phillips-Perron unit root test was employed to ascertain whether the series were stationary or non-stationary. So as to measure the efficiency of government spending on human capital development, the data analysis was performed with Data Envelopment Analysis, using an input-oriented variable-returns-to-scale specification. The findings of the study uncovered that the efficiency of government spending declined substantially from 1990 up till 2011. Ajadi and Adebakin (2014) investigated the nature of the association between human capital development and economic expansion. The descriptive survey method of research was incorporated, and a multi-stage sampling method was utilized to select a sample of 200 respondents for the research.
An adopted questionnaire with a 0.86 reliability index was utilized for information gathering. The data gathered were examined utilizing Pearson&#39;s Product Moment Correlation Coefficient. The results demonstrated that education has a predictive r-value of 0.76 on individual personal earnings and that the type of occupation (job) is linked with individual personal earnings (r=0.64). It therefore concluded that the economic expansion rate is influenced by individual personal earnings, and suggested that government ought to create adequate educational policy to meet the human capital needs of the populace for economic prosperity. Harpaljit, Baharom and Muzafar (2014) examined the connection between education spending and the economic expansion rate in China and India by utilizing yearly data from 1970 to 2005. This investigation used multiple econometric methods, including the cointegration test, the BLUE-OLS estimator, and the VECM. The result uncovered that there is a long-term nexus between earnings (income) level, Gross Domestic Product per capita and education spending in both China and India. Also, a unidirectional causal relationship was obtained for the two nations: running from earnings (income) level to education spending for China, while for India, education spending Granger-causes the level of earnings. Urhie (2014) analyzed the impacts of the components of public education spending on both educational achievement and Nigeria&rsquo;s economic expansion rate from 1970 to 2010. The investigation utilized the Two Stage Least Squares estimation procedure to analyze the hypotheses. The result uncovered that capital and recurrent spending on education affect education achievement and the economic expansion rate differently. Recurrent spending negatively affected education while capital spending was found to have a positive effect. Conversely, recurrent education spending had a positive and notable effect on economic expansion while capital spending had a negative effect.
Chude and Chude (2013) explored the impacts of public education spending on Nigeria&rsquo;s economic expansion over the time frame 1977 to 2012, with particular focus on disaggregated and sectorial spending analysis. An error correction model (ECM) was utilized. The result uncovered that over the long-term, aggregated education spending is significant and has a positive relationship with economic expansion. Abdul (2013) analyzed education and economic expansion in Malaysia, given that human capital, or education, is now one of the focal issues in research on economic advancement. The researcher contended that the current studies showed that human capital, particularly education, is a significant ingredient of economic expansion; thus the researcher investigated the issue of Malaysian education data. Notwithstanding a few data quality issues, the Malaysian education datasets are heavily correlated for both secondary and tertiary education. The researcher further tested the impact of the various datasets on the education and economic expansion relationship. The results were fundamentally the same, thereby indicating that the Malaysian education datasets are reliable. The results were econometrically consistent irrespective of the measure of education utilized. All datasets lead to the same conclusion: education is inversely associated with economic expansion. Alvina and Muhammad (2013) inspected the long-term connection between government education spending and economic expansion. The investigation utilized heterogeneous panel data analysis. Panel unit root tests were applied to check stationarity. The single-equation approach to panel cointegration (Kao, 1999) and Pedroni&#39;s residual-based panel cointegration test (1997, 1999) were applied to ascertain the presence of a long-term connection between public education spending and gross domestic production.
Finally, the panel fully modified OLS result uncovered that the effect of government education spending on economic expansion is more prominent in developing nations as contrasted with developed nations, which confirmed the &quot;catching-up effect&quot; in developing nations. Mehmet and Sevgi (2013) inspected the nexus between education spending and economic expansion in Turkey. The examination utilized econometric methods as the principal investigation instruments. The result uncovered a positive connection between education spending and economic expansion in the Turkish economy for the period 1970-2012, implying that education spending in Turkey positively affected economic expansion. Edame (2014) researched the determinants of Nigeria&rsquo;s public infrastructure spending, utilizing an ECM. He found that the pace (rate) of urbanization, government income, population density, external reserves, and the kind of government collectively or independently impact public spending on infrastructure. Aregbeyen and Akpan (2013) examined the long-term determinants of Nigeria&rsquo;s government spending, utilizing a disaggregated approach. In their examination, they found that foreign aid is significantly and positively influencing recurrent spending to the detriment of capital spending; that income (revenue) is likewise positively influencing government spending; that trade transparency (openness) is adversely impacting government spending; that the debt service obligation diminishes all parts of government spending over the long-term; that the higher the size of the urban population, the higher would be government recurrent spending on economic services; and solid proof that Federal government spending is biased towards recurrent spending, which increases substantially during election times. In like manner, Adebayo et al.
(2014) researched the effect of public spending on the industrial expansion of Nigeria through cointegration and causality analysis, and discovered that public spending on administration, economic services, and transfers remained negatively related with industrial expansion, while government spending on social services remained positively related in the long-term. They concluded in this manner that there is no crowding-out impact. From these studies reviewed, there is proof that all the investigations combined economic, social, and political determinants of government spending. Srinivasan (2013) analyzed the causal nexus between public spending and economic expansion in India utilizing the cointegration approach and an error correction model from 1973 to 2012. The cointegration test result uncovered the presence of a long-term equilibrium connection between public spending and economic expansion. The error correction model estimate indicated unilateral causality which runs from economic expansion to public spending in the short-term and long-term. Mohd and Fidlizan (2012) narrowed down on the long-term relationship and causality between government spending on education and economic expansion in the Malaysian economy from 1970-2010. The investigation utilized Vector Auto Regression (VAR). The result indicated that economic expansion is cointegrated with fixed capital formation (CAP), labour force participation (LAB) and government spending on education (EDU). Economic expansion was found to Granger-cause the education variable and vice versa. In addition, the investigation demonstrated that human capital variables such as education go a long way in affecting economic expansion. The consensus from the above investigations demonstrates that government spending impacts positively on economic expansion. Notable theories that support this case include those of Keynes, Wagner, and Peacock and Wiseman.
Keynes, in his hypothesis, draws a connection between public spending and economic expansion and infers that causality runs from public spending to income, meaning that public sector spending is an exogenous factor and a policy instrument for expanding national income. Again, it holds that an expansion in government spending prompts higher economic expansion. Wagner, Peacock and Wiseman and numerous economists have developed various theories on public spending and economic expansion. Wagner positioned public sector spending as a behavioral variable that positively indicates whether an economy is prospering. Notwithstanding, the neoclassical growth model created by Solow opined that fiscal policy doesn&#39;t have any impact on the expansion of national income. These multifaceted results obtained from prior investigations show that, in reality, public spending and other inputs in the education system may have some innate heterogeneity, suggesting that what holds in a given area or country may not hold in another. In the light of the above, this investigation sees it as necessary to re-examine the allotment of public spending on education, with regard to the type of impact this spending has on education outcomes. <strong>2.4 Theoretical Framework</strong> The endogenous growth theory has been adopted as the appropriate theoretical framework for this study. This owes much to the fact that the theory emphasizes the critical role of human capital development, through public investments in education, as a major driver of aggregate productivity in the economy. This is also supported by the work of Ogunleye, Owolabi, Sanyaolu and Lawal (2017), who ascertained how economic expansion was influenced by advancement in human capital from 1981 to 2015. In that study it was discovered that economic expansion is greatly influenced by advancement in human capital.
Also, economic expansion appeared to be facilitated by secondary education enrolment, tertiary education enrolment, and aggregate spending on health and education by the government. <strong>2.5 Research Gap</strong> Though much research work has been carried out on the relationship between human capital development, public sector expenditure on education and economic expansion in Nigeria, a lot still needs to be done to address some anomalies in these studies. Of note is that the methods adopted in most of these studies are faced with methodological limitations and policy carry-overs, not minding that no two economies are the same. This study, therefore, seeks to fill the gaps created by previous research. Importantly, time plays a vital role in research, making continuous and up-to-date studies imperative so as to keep abreast of changes as quickly as possible. In the studies carried out by Ojewumi and Oladimeji (2016) and Muhammad and Benedict (2015), time series data covering 1981-2013 and 1981-2010, respectively, were used, while this study uses updated data covering 1981-2018, thereby making it current and up to date. &nbsp; <strong>Chapter Three</strong> <strong>Research Methods</strong> <strong>3.1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Research Design</strong> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; An ex post facto research design and econometric procedures of analysis will be employed for the empirical investigation. <strong>3.2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Model Specification</strong> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Here, we specify a model which captures the relationship between real gross domestic product in per capita terms and the selected education enrolment variables.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <em>LnPER_RGDP<sub>t</sub></em> = &beta;<sub>0</sub> + &beta;<sub>1</sub><em>LnPER_PEE<sub>t</sub></em> + &beta;<sub>2</sub><em>PENR<sub>t</sub></em> + &beta;<sub>3</sub><em>SENR<sub>t</sub></em> + &beta;<sub>4</sub><em>TENR<sub>t</sub></em> + &mu;<sub>t</sub> &nbsp; &nbsp; (3.1) In the above model, <em>Ln</em> denotes the natural log, <em>PER_RGDP</em> denotes real gross domestic product in per capita terms, <em>PER_PEE</em> denotes public expenditure on education in per capita terms, <em>PENR</em> denotes the percentage of primary education enrolment in the population total, <em>SENR</em> denotes the percentage of secondary education enrolment in the population total, and <em>TENR</em> denotes the percentage of tertiary education enrolment in the population total. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; For further empirical analysis we can explicitly express the above model in the form of an autoregressive distributed lag (ARDL) model: <em>LnPER_RGDP<sub>t</sub></em> = &alpha;<sub>0</sub> + &Sigma;<sub>i=1</sub><sup>p</sup>&alpha;<sub>i</sub><em>LnPER_RGDP<sub>t-i</sub></em> + &Sigma;<sub>i=0</sub><sup>q</sup>(&beta;<sub>i</sub><em>LnPER_PEE<sub>t-i</sub></em> + &gamma;<sub>i</sub><em>PENR<sub>t-i</sub></em> + &delta;<sub>i</sub><em>SENR<sub>t-i</sub></em> + &theta;<sub>i</sub><em>TENR<sub>t-i</sub></em>) + &mu;<sub>t</sub> &nbsp; &nbsp; (3.2) Here, based on economic theory and intuition, all of the coefficients are expected to be positive. <strong>3.3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Estimation Procedure</strong> <strong>3.3.1&nbsp;&nbsp;&nbsp; Unit Root Test with Breaks</strong> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Unlike the popularly used unit root tests (e.g. ADF and PP), which test the null of non-stationarity without accounting for possible break-points in the data, the break-point unit root test of Perron (1989) tests the null of non-stationarity against alternatives that account for a single break-point in the given data.
The alternative hypotheses for this test are succinctly described in the following equations: <em>y<sub>t</sub></em> = &mu; + &theta;<em>I<sub>t</sub></em> + &beta;<em>t</em> + &alpha;<em>y<sub>t-1</sub></em> + <em>e<sub>t</sub></em> &nbsp; &nbsp; (3.3) <em>y<sub>t</sub></em> = &mu; + &beta;<em>t</em> + &gamma;<em>T<sub>t</sub>*</em> + &alpha;<em>y<sub>t-1</sub></em> + <em>e<sub>t</sub></em> &nbsp; &nbsp; (3.4) <em>y<sub>t</sub></em> = &mu; + &theta;<em>I<sub>t</sub></em> + &beta;<em>t</em> + &gamma;<em>T<sub>t</sub>*</em> + &delta;<em>D<sub>t</sub></em> + &alpha;<em>y<sub>t-1</sub></em> + <em>e<sub>t</sub></em> &nbsp; &nbsp; (3.5) The first equation captures a break in the intercept of the data with the intercept-break dichotomous variable <em>I<sub>t</sub></em>, which takes on the value 1 only when <em>t</em> surpasses the break-point <em>Br</em>; the second captures a break in the slope of the data with a regime-shift dichotomous variable <em>T<sub>t</sub>*</em>, which takes on the value 1 only when <em>t</em> surpasses the break-point <em>Br</em>; and the third equation captures both effects concurrently, together with the &ldquo;crash&rdquo; dichotomous variable <em>D<sub>t</sub></em>, which takes on the value 1 only when <em>t</em> equals <em>Br</em>+1. <strong>3.3.2&nbsp;&nbsp;&nbsp; ARDL Bounds Cointegration Approach</strong> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The popularly-used residual-based cointegration methods may not be very useful when the time-series under consideration attain stationarity at different levels. On the other hand, in addition to being econometrically efficient for small-sample cases (<em>n</em> &lt; 30), the bounds cointegration method developed by Pesaran and Shin (1999) is particularly useful for combining time-series that attain stationarity at levels and at first-difference. The bounds cointegration method makes use of upper bounds and lower bounds derived from four pairs of critical values corresponding to four different levels of statistical significance: the 1% level, the 2.5% level, the 5% level, and the 10% level. The null of &ldquo;no cointegration&rdquo; is to be rejected only if the computed bounds f-statistic surpasses the upper bound obtained from the chosen pair of critical values, while the alternative hypothesis of cointegration is to be rejected only if the bounds f-statistic falls below the corresponding lower bound.
Therefore, in contrast to other cointegration tests, the bounds test can be inconclusive: this occurs when the bounds F-statistic neither surpasses the chosen upper bound nor falls below the chosen lower bound.

To obtain the bounds F-statistic, an F-test is performed jointly on all of the un-differenced (lagged level) explanatory variables of the &ldquo;unrestricted&rdquo; error correction model (ECM) derived from a corresponding autoregressive distributed lag (ARDL) model, such as the previously specified empirical ARDL model in (3.2). This takes the general form:

&Delta;i<sub>t</sub> = a<sub>0</sub> + &Sigma;a<sub>1m</sub>&Delta;i<sub>t-m</sub> + &Sigma;a<sub>2m</sub>&Delta;j<sub>t-m</sub> + &Sigma;a<sub>3m</sub>&Delta;k<sub>t-m</sub> + b<sub>1</sub>i<sub>t-1</sub> + b<sub>2</sub>j<sub>t-1</sub> + b<sub>3</sub>k<sub>t-1</sub> + e<sub>t</sub>&nbsp;&nbsp;&nbsp;&nbsp;(3.6)

where &Delta;<em>i<sub>t</sub></em> denotes the chosen endogenous variable in first difference; &Delta;<em>j<sub>t</sub></em> and &Delta;<em>k<sub>t</sub></em> denote the chosen exogenous variables in first differences; and <em>e<sub>t</sub></em> denotes the stochastic component. Choosing the best lag-length is made possible by information criteria such as the Akaike and the Schwarz Information Criterion. In the case where the bounds cointegration test rejects the null, a &ldquo;restricted&rdquo; version of the error correction model can be estimated alongside a long-run model to capture the relevant short-run and long-run dynamics, as in the following expressions:

&Delta;i<sub>t</sub> = a<sub>0</sub> + &Sigma;a<sub>1m</sub>&Delta;i<sub>t-m</sub> + &Sigma;a<sub>2m</sub>&Delta;j<sub>t-m</sub> + &Sigma;a<sub>3m</sub>&Delta;k<sub>t-m</sub> + &lambda;ECT<sub>t-1</sub> + e<sub>t</sub>&nbsp;&nbsp;&nbsp;&nbsp;(3.7)

i<sub>t</sub> = &theta;<sub>0</sub> + &theta;<sub>1</sub>j<sub>t</sub> + &theta;<sub>2</sub>k<sub>t</sub> + u<sub>t</sub>&nbsp;&nbsp;&nbsp;&nbsp;(3.8)

Here, the coefficient &lambda; on the error correction term <em>ECT<sub>t-1</sub></em> is expected to be negative and bounded between -1 and 0 in order to capture the short-run rate of adjustment to long-run equilibrium, while the coefficients &theta;<sub>1</sub>,&hellip;,&theta;<sub>j</sub> in (3.8) capture the state of long-run equilibrium and are obtained from &theta;<sub>1</sub> = <em>b<sub>2</sub></em>/<em>b<sub>1</sub></em>,&hellip;, &theta;<sub>j</sub> = <em>b<sub>j</sub></em>/<em>b<sub>1</sub></em> respectively.
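The mapping from ARDL level estimates to the adjustment speed and long-run coefficients described above can be sketched in a few lines of Python. The coefficient values below are illustrative placeholders, not estimates from this study:

```python
# Illustrative sketch: recovering the error-correction speed and long-run
# coefficient from an ARDL(1, 0) specification
#   y_t = c + phi*y_{t-1} + beta*x_t + e_t
# The coefficient on ECT_{t-1} is (phi - 1), and the long-run coefficient
# on x is beta / (1 - phi). Placeholder numbers, not study estimates.

def long_run_from_ardl(phi, betas):
    """Map ARDL level coefficients to (adjustment speed, long-run coefficients)."""
    speed = phi - 1.0                                   # coefficient on ECT_{t-1}
    theta = {name: b / (1.0 - phi) for name, b in betas.items()}
    return speed, theta

speed, theta = long_run_from_ardl(0.7, {"x": 0.15})
# speed is about -0.3: roughly 30% of any disequilibrium corrected per period
# theta["x"] is about 0.5: long-run effect of a unit change in x
```

The same arithmetic connects the ARDL estimates to the restricted ECM and long-run model reported in Chapter Four.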
<strong>3.4 Model Evaluation Tests and Techniques</strong>

<strong>3.4.1 R<sup>2</sup> and Adjusted R<sup>2</sup></strong>

The R<sup>2</sup> and the adjusted R<sup>2</sup> both provide measures of goodness-of-fit. However, the adjusted R<sup>2</sup> is preferred because it is robust against redundant regressors, which inflate the conventional R<sup>2</sup>. They involve the following statistics:

R<sup>2</sup> = 1 - <em>SS<sub>r</sub></em>/<em>SS<sub>t</sub></em>&nbsp;&nbsp;&nbsp;&nbsp;(3.9)

Adjusted R<sup>2</sup> = 1 - (1 - R<sup>2</sup>)(<em>n</em> - 1)/(<em>n</em> - <em>k</em> - 1)&nbsp;&nbsp;&nbsp;&nbsp;(3.10)

where <em>SS<sub>r</sub></em> denotes the sum of squares of the regression residuals, <em>SS<sub>t</sub></em> denotes the total sum of squares of the dependent variable, <em>n</em> denotes the number of observations, and <em>k</em> denotes the number of regressors (Verbeek, 2004).

<strong>3.4.2 T-Test and F-Test</strong>

The t-test and the F-test can be utilized to evaluate hypotheses about the statistical significance of the parameters of a regression. The t-test applies to a single parameter, while the F-test applies to multiple parameters jointly. They involve the following statistics:

t<sub>k</sub> = <em>a<sub>k</sub></em>/se(<em>a<sub>k</sub></em>)&nbsp;&nbsp;&nbsp;&nbsp;(3.11)

f = [R<sup>2</sup>/(<em>J</em> - 1)] / [(1 - R<sup>2</sup>)/(<em>N</em> - <em>J</em>)]&nbsp;&nbsp;&nbsp;&nbsp;(3.12)

where <em>a<sub>k</sub></em> denotes a single parameter estimate, <em>se</em> denotes its standard error, R<sup>2</sup> denotes the coefficient of determination of the regression, <em>N</em> denotes the number of observations, and <em>J</em> denotes the number of regressors. For the t-test, the null hypothesis of statistical insignificance is rejected only if <em>t<sub>k</sub></em> exceeds its 5% critical value, while for the F-test the null hypothesis of joint statistical insignificance is rejected only if <em>f</em> exceeds its 5% critical value at <em>J</em>-1 and <em>N</em>-<em>J</em> degrees of freedom (Verbeek, 2004).
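The statistics in (3.9)&ndash;(3.12) translate directly into small helper functions; a minimal Python sketch with made-up inputs:

```python
# Minimal sketches of the goodness-of-fit and significance statistics
# described in (3.9)-(3.12). Inputs below are made-up illustrations.

def r_squared(ss_resid, ss_total):
    # (3.9): share of variation in the dependent variable explained
    return 1.0 - ss_resid / ss_total

def adj_r_squared(r2, n, k):
    # (3.10): penalizes redundant regressors (n observations, k regressors)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def t_statistic(estimate, std_error):
    # (3.11): single-parameter significance
    return estimate / std_error

def f_statistic(r2, n_obs, n_regressors):
    # (3.12): joint significance with (J-1, N-J) degrees of freedom
    j = n_regressors
    return (r2 / (j - 1)) / ((1.0 - r2) / (n_obs - j))

r2 = r_squared(20.0, 100.0)   # 0.8: the regression explains 80% of variation
```

With <em>n</em> = 38 and <em>k</em> = 4 (as in the data used later), an R<sup>2</sup> of 0.8 would shrink to an adjusted R<sup>2</sup> of roughly 0.776.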
<strong>3.4.3 Residual Normality Test</strong>

The Jarque-Bera test statistic (Jarque and Bera, 1987) is useful in determining whether the residuals of a regression are normally distributed. The Jarque-Bera statistic is computed as:

JB = (<em>N</em>/6)[<em>S</em><sup>2</sup> + (<em>K</em> - 3)<sup>2</sup>/4]&nbsp;&nbsp;&nbsp;&nbsp;(3.13)

where <em>S</em> is the skewness, <em>K</em> is the kurtosis, and <em>N</em> is the number of observations. Under the null hypothesis of a normal distribution, the Jarque-Bera statistic is distributed as <em>X<sup>2</sup></em> with 2 degrees of freedom. Therefore, the null hypothesis of normality is rejected if the Jarque-Bera statistic exceeds the relevant <em>X<sup>2</sup></em> critical value, and is not rejected otherwise.

<strong>3.4.4 Heteroskedasticity Test</strong>

The Breusch-Pagan-Godfrey test (Breusch and Pagan, 1979; Godfrey, 1978) evaluates the null hypothesis of &ldquo;no heteroskedasticity&rdquo; against the alternative hypothesis of heteroskedasticity of the form &sigma;<sub>t</sub><sup>2</sup> = &sigma;<sup>2</sup>h(z<sub>t</sub>'&gamma;), where <em>z<sub>t</sub></em> is a vector of independent variables. The test is performed by completing an auxiliary regression of the squared residuals from the original equation on <em>z<sub>t</sub></em>. The explained sum of squares from this auxiliary regression is then divided by 2&sigma;&#770;<sup>4</sup> to give an LM statistic, which follows a chi-square (<em>X<sup>2</sup></em>) distribution with degrees of freedom equal to the number of variables in <em>z<sub>t</sub></em> under the null hypothesis of no heteroskedasticity. Therefore, the null hypothesis of no heteroskedasticity is rejected if the LM statistic exceeds the relevant <em>X<sup>2</sup></em> critical value, and is not rejected otherwise.

<strong>3.4.5 Serial Correlation Test</strong>

The Godfrey (1978) Lagrange multiplier (LM) test is useful when testing for serial correlation in the residuals of a regression.
The LM test statistic is computed as follows. First, assume there is a regression equation:

y = X&beta; + &epsilon;&nbsp;&nbsp;&nbsp;&nbsp;(3.14)

where <em>&beta;</em> are the estimated coefficients and <em>&epsilon;</em> are the errors. The test statistic for lag order <em>&rho;</em> is based on the auxiliary regression for the residuals <em>&epsilon;&#770; = y - X&beta;&#770;</em>, which is given by:

&epsilon;&#770;<sub>t</sub> = X<sub>t</sub>&gamma; + &alpha;<sub>1</sub>&epsilon;&#770;<sub>t-1</sub> + &hellip; + &alpha;<sub>&rho;</sub>&epsilon;&#770;<sub>t-&rho;</sub> + v<sub>t</sub>&nbsp;&nbsp;&nbsp;&nbsp;(3.15)

The coefficients <em>&gamma;</em> and <em>&alpha;</em> are expected to be statistically insignificant if the null hypothesis of &ldquo;no serial correlation&rdquo; is to be retained. On the other hand, the null hypothesis cannot be retained if the coefficients <em>&gamma;</em> and <em>&alpha;</em> are found to be statistically significant.

<strong>3.4.6 Model Specification Test</strong>

The Ramsey (1969) Regression Error Specification Test (RESET) is a general test for the following types of functional specification errors:

1. Omitted variables: some relevant explanatory variables are not included.
2. Incorrect functional form: some of the dependent and independent variables should be transformed to logs, powers, etc.
3. Correlation between the independent variables and the residuals.

Ramsey (1969) showed that these specification errors produce a non-zero mean vector for the residuals.
Therefore, the null and alternative hypotheses of the RESET test are:

H<sub>0</sub>: &epsilon; ~ N(0, &sigma;<sup>2</sup>I);&nbsp;&nbsp;H<sub>1</sub>: &epsilon; ~ N(&mu;, &sigma;<sup>2</sup>I), &mu; &ne; 0&nbsp;&nbsp;&nbsp;&nbsp;(3.16)

The RESET test is based on an augmented regression, which is given as:

y = X&beta; + Z&gamma; + &epsilon;&nbsp;&nbsp;&nbsp;&nbsp;(3.17)

The null hypothesis of a well-specified model is tested against the alternative hypothesis of a poorly specified model by evaluating the restriction <em>&gamma;</em> = 0. The null hypothesis is retained if <em>&gamma;</em> = 0, whereas it is rejected if <em>&gamma;</em> &ne; 0. The crucial factor in constructing the augmented regression is determining which variables should constitute <em>Z</em>. If <em>Z</em> consists of omitted variables, then the test of <em>&gamma;</em> = 0 is simply the omitted variables test. But if <em>y</em> is wrongly specified as an additive relation instead of a multiplicative relation such as <em>y</em> = &beta;<sub>0</sub>X<sub>1</sub><sup>&beta;1</sup>X<sub>2</sub><sup>&beta;2</sup> + &epsilon;, then the test of <em>&gamma;</em> = 0 is a functional form specification test. In the latter case, the restriction <em>&gamma;</em> = 0 is tested by including powers of the predicted values of the dependent variable in <em>Z</em>.

<strong>3.4.7 CUSUMSQ Stability Test</strong>

For the test of stability, the cumulative sum of recursive residuals (CUSUM) and cumulative sum of squares of recursive residuals (CUSUMSQ) tests proposed by Brown, Durbin, and Evans (1975) were employed.
The technique is appropriate for time series data and is recommended for use when one is uncertain about when a structural change might have taken place. The null hypothesis is that the coefficient vector &beta; is the same in every period. The CUSUM test is based on the cumulated sum of the recursive residuals:

W<sub>t</sub> = &Sigma;<sub>r=k+1</sub><sup>t</sup> w<sub>r</sub>/s&nbsp;&nbsp;&nbsp;&nbsp;(3.18)

where <em>w<sub>r</sub></em> is the recursive residual

w<sub>r</sub> = (y<sub>r</sub> - x<sub>r</sub>'b<sub>r-1</sub>) / [1 + x<sub>r</sub>'(X<sub>r-1</sub>'X<sub>r-1</sub>)<sup>-1</sup>x<sub>r</sub>]<sup>1/2</sup>&nbsp;&nbsp;&nbsp;&nbsp;(3.19)

and <em>s</em> is the standard deviation of the recursive residuals:

s<sup>2</sup> = &Sigma;(w<sub>r</sub> - w&#772;)<sup>2</sup>/(T - k - 1)&nbsp;&nbsp;&nbsp;&nbsp;(3.20)

<strong>3.5 Sources of Data</strong>

The study adopted secondary data, with the Central Bank of Nigeria serving as the main source of data collection.

<strong>Chapter Four</strong>
<strong>Empirical Results</strong>

<strong>4.1 Descriptive Statistics</strong>

Before going into the cointegration analysis, we briefly examine the properties of the data with descriptive statistics. Table 4.1 and Figures 4.1 to 4.5 serve this purpose.

Table 4.1: Descriptive Statistics

              PER_RGDP    PER_PEE   PENR          SENR          TENR
Mean          264316.01   635.72    23096192.94   5796345.78    787115.08
Median        232704.55   361.03    19747039.31   4410684.33    755776.70
Maximum       385349.04   2340.12   46188979.59   11840028.21   1648670.36
Minimum       199039.15   7.38      9554076.94    1846106.82    49626.49
Std. Dev.     66113.04    681.06    9425336.46    3142601.76    592505.50
Skewness      0.65        0.77      0.59          0.79          0.17
Kurtosis      1.83        2.43      2.27          2.15          1.36
Jarque-Bera   4.88        4.24      3.01          5.14          4.47
Probability   0.09        0.12      0.22          0.08          0.11
Observations  38          38        38            38            38

Figure 4.1: Trend of Real Gross Domestic Product (RGDP) Per Capita
Figure 4.2: Trend of Public Expenditure on Education (PEE) Per Capita
Figure 4.3: Trend of Primary School Enrolment (PENR)
Figure 4.4: Trend of Secondary School Enrolment (SENR)
Figure 4.5: Trend of Tertiary School Enrolment (TENR)

From the second column of Table 4.1, the RGDP per capita mean is NGN 264,316.01 ($734.21). This critically lags behind the RGDP per capita mean in all developed (OECD) countries and underscores the need for human and non-human capital development. Further, the RGDP per capita maximum is NGN 385,349.04 while its minimum is NGN 199,039.15. Given that the trend of RGDP per capita is positively sloped, as seen in Figure 4.1, the disparity between the RGDP per capita maximum and minimum indicates growth in RGDP per capita during the period under investigation. Lastly, the Jarque-Bera statistic (4.88) and probability value (0.09) of RGDP per capita suggest that it follows a normal distribution, with NGN 66,113.04 as its standard deviation.

From the third column of Table 4.1, the PEE per capita mean is NGN 635.72 ($1.77). Just like RGDP per capita, this critically lags behind the PEE per capita mean in all developed (OECD) countries and underscores the need for more government intervention in the education sector. Further, the PEE per capita maximum is NGN 2,340.12 while its minimum is NGN 7.38.
Given that the trend of PEE per capita is positively sloped exponentially, as seen in Figure 4.2, the disparity between the PEE per capita maximum and minimum indicates rapid growth in PEE per capita during the period under investigation. Lastly, the Jarque-Bera statistic (4.24) and probability value (0.12) of PEE per capita suggest that it follows a normal distribution, with NGN 681.06 as its standard deviation.

From the fourth column of Table 4.1, the PENR mean is 23,096,192.94. This represents about 18.33% of the total population mean (126,036,036.63) and indicates high primary school enrolment during the period under investigation. Further, the PENR maximum is 46,188,979.59 while its minimum is 9,554,076.94. Given that the trend of PENR is positively sloped linearly, as seen in Figure 4.3, the disparity between the PENR maximum and minimum indicates consistent growth in PENR during the period under investigation. Lastly, the Jarque-Bera statistic (3.01) and probability value (0.22) of PENR suggest that it follows a normal distribution, with 9,425,336.46 as its standard deviation.

From the fifth column of Table 4.1, the SENR mean is 5,796,345.78. This represents about 4.60% of the total population mean (126,036,036.63) and indicates relatively low secondary school enrolment during the period under investigation. Further, the SENR maximum is 11,840,028.21 while its minimum is 1,846,106.82. Given that the trend of SENR is positively sloped exponentially, as seen in Figure 4.4, the disparity between the SENR maximum and minimum indicates rapid growth in SENR during the period under investigation. Lastly, the Jarque-Bera statistic (5.14) and probability value (0.08) of SENR suggest that it follows a normal distribution, with 3,142,601.76 as its standard deviation.

From the sixth column of Table 4.1, the TENR mean is 787,115.08. This represents about 0.63% of the total population mean (126,036,036.63) and indicates very low tertiary school enrolment during the period under investigation. Further, the TENR maximum is 1,648,670.36 while its minimum is 49,626.49. Given that the trend of TENR is positively sloped concavely, as seen in Figure 4.5, the disparity between the TENR maximum and minimum indicates slow growth in TENR during the period under investigation. Lastly, the Jarque-Bera statistic (4.47) and probability value (0.11) of TENR suggest that it follows a normal distribution, with 592,505.50 as its standard deviation.

From the descriptive statistics above, it is obvious that substantial disparities exist between the maximum and minimum values of the variables, especially for PEE per capita and TENR. This may distort the regression results of the cointegration analysis and may also lead to unnecessarily large regression coefficients. To avoid these problems, we have transformed the variables in two major ways. Firstly, we have reduced the disparity among the variables by expressing PENR, SENR, and TENR as percentages of the population total. Secondly, we have downsized all the variables to a smaller scale by expressing them in natural log form. Therefore, instead of RGDP per capita, PEE per capita, PENR, SENR, and TENR, we now have Ln_PER_RGDP, Ln_PER_PEE, Ln_PENR, Ln_SENR, and Ln_TENR respectively as our investigative variables.
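The Jarque-Bera values reported in Table 4.1 can be reproduced (up to the rounding of the skewness and kurtosis inputs) from the formula JB = (N/6)[S&sup2; + (K-3)&sup2;/4]; a quick Python check using the rounded PER_RGDP moments from the table:

```python
# Jarque-Bera statistic, JB = (N/6) * (S^2 + (K - 3)^2 / 4).
# Inputs are the rounded PER_RGDP moments from Table 4.1; the small gap to
# the reported JB of 4.88 comes from that rounding of S and K.

def jarque_bera(skew, kurt, n_obs):
    return (n_obs / 6.0) * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

jb = jarque_bera(0.65, 1.83, 38)
# jb is about 4.84, close to the reported 4.88; since this is below the 5%
# chi-square(2) critical value of 5.99, normality is not rejected.
```

An exactly normal sample (S = 0, K = 3) gives JB = 0 regardless of sample size, which is why small JB values with large p-values support normality here.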
<strong>4.2 Break-Point Unit Root Test Results</strong>

Table 4.2: Break-Point Unit Root Test Result Summary

Variable          Lags  Specification      Break Date  ADF Test Statistic  5% Critical Value  Summary
Ln_PER_RGDP t     0     Intercept & Trend  2001        -3.3506             -5.1757            Non-Stationary
∆Ln_PER_RGDP t    2     Intercept & Trend  2001        -5.4176             -5.1757            Stationary
Ln_PER_PEE t      0     Intercept & Trend  2004        -3.3665             -5.1757            Non-Stationary
∆Ln_PER_PEE t     5     Intercept & Trend  1995        -5.6226             -5.1757            Stationary
Ln_PENR t         7     Intercept & Trend  2004        -7.6901             -5.1757            Stationary
Ln_SENR t         3     Intercept & Trend  1998        -5.0584             -5.1757            Non-Stationary
∆Ln_SENR t        3     Intercept & Trend  2016        -6.4199             -5.1757            Stationary
Ln_TENR t         1     Intercept & Trend  1998        -6.9768             -5.1757            Stationary

Note(s): Lag selection based on the Schwarz Information Criterion (SIC).

As seen in the above table, the time-series variables have different orders of integration. Specifically, <em>Ln_PENR</em> and <em>Ln_TENR</em> are stationary at levels, while the others are stationary only at first difference. The bounds cointegration method is more appropriate in this case because it permits the combination of level-stationary and difference-stationary time series.
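The three-way decision rule of the bounds test described in Section 3.3.2 can be stated as a small helper before turning to the results; a sketch using the 5% critical bounds for k = 4 regressors that appear in the next section (I(0) = 2.86, I(1) = 4.01):

```python
# Three-way decision rule of the bounds cointegration test: reject the null
# of "no cointegration" above the upper bound, retain it below the lower
# bound, and return "inconclusive" in between.

def bounds_decision(f_stat, lower, upper):
    if f_stat > upper:
        return "cointegration"
    if f_stat < lower:
        return "no cointegration"
    return "inconclusive"

# 5% bounds for k = 4 regressors (unrestricted intercept and trend)
LOWER, UPPER = 2.86, 4.01
print(bounds_decision(8.5420, LOWER, UPPER))   # the computed F-stat used below
print(bounds_decision(3.10, LOWER, UPPER))     # a value inside the bounds
```

Applied to the computed F-statistic of 8.5420, the rule returns "cointegration", matching the rejection of the null reported below.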
<strong>4.3 ARDL Bounds Cointegration Test Results</strong>

Table 4.3: Lag/Model Selection Criteria
Number of Models Evaluated: 16
Dependent Variable: <em>Ln_PER_RGDP</em>

S/N  Model  AIC      Specification
1    16     -4.0889  ARDL(1, 0, 0, 0, 0)*
2    15     -4.0552  ARDL(1, 0, 0, 0, 1)
3    12     -4.0477  ARDL(1, 0, 1, 0, 0)
4    14     -4.0448  ARDL(1, 0, 0, 1, 0)
5    8      -4.0445  ARDL(1, 1, 0, 0, 0)
6    11     -4.0212  ARDL(1, 0, 1, 0, 1)
7    13     -4.0121  ARDL(1, 0, 0, 1, 1)
8    10     -4.0118  ARDL(1, 0, 1, 1, 0)
9    7      -4.0066  ARDL(1, 1, 0, 0, 1)
10   6      -3.9994  ARDL(1, 1, 0, 1, 0)
11   4      -3.9970  ARDL(1, 1, 1, 0, 0)
12   9      -3.9894  ARDL(1, 0, 1, 1, 1)
13   3      -3.9672  ARDL(1, 1, 1, 0, 1)
14   5      -3.9626  ARDL(1, 1, 0, 1, 1)
15   2      -3.9589  ARDL(1, 1, 1, 1, 0)
16   1      -3.9357  ARDL(1, 1, 1, 1, 1)

Note(s): * indicates the chosen optimal lag specification based on the Akaike Information Criterion.

The Akaike criterion shows that ARDL(1, 0, 0, 0, 0) is the best lag specification for the ARDL model, indicating that it is best to include only a single lag of the endogenous variable (<em>Ln_PER_RGDP</em>) and no lags of the exogenous variables (<em>Ln_PER_PEE</em>, <em>Ln_PENR</em>, <em>Ln_SENR</em>, and <em>Ln_TENR</em>). On this basis, an ARDL model was estimated and the bounds cointegration method was applied to test for cointegration, as seen in the following tables.

Table 4.4: Autoregressive Distributed Lag (ARDL) Model Estimates
Dependent Variable: <em>Ln_PER_RGDP t</em>

Regressors        Coefficient  Standard Error  t-statistic  Prob.
Ln_PER_RGDP t-1   0.723844     0.063884        11.33053     0.0000
Ln_PER_PEE t      0.006558     0.014438        0.454194     0.6529
Ln_PENR t         0.166945     0.048731        3.425881     0.0017
Ln_SENR t         0.105751     0.044395        2.382033     0.0235
Ln_TENR t         0.033421     0.036354        0.919326     0.3650
C                 2.806660     0.598588        4.688802     0.0001

Table 4.5: Bounds Cointegration Test
Computed Wald (F-Statistic): 8.5420

k = 4   10% level       5% level        2.5% level      1% level
        I(0)    I(1)    I(0)    I(1)    I(0)    I(1)    I(0)    I(1)
F*      2.45    3.52    2.86    4.01    3.25    4.49    3.74    5.06

Source: Pesaran et al. <em>k</em> signifies the number of regressors; <em>F</em>* corresponds to the model with unrestricted intercept and trend.

In the above table, the bounds test statistic (8.5420) surpasses the upper bound (4.01) at the 5% level of significance and therefore leads to the rejection of the null hypothesis of &ldquo;no cointegration&rdquo;. Based on this result, a &ldquo;restricted&rdquo; error correction model was estimated, as well as a long-run &lsquo;equilibrium&rsquo; model, as seen in the subsequent tables and equations.

Table 4.6a: Error Correction Model
Dependent Variable: &Delta;<em>Ln_PER_RGDP t</em>

Regressors        Coefficient  Standard Error  t-statistic  Prob.
∆Ln_PER_PEE t     0.0065       0.0144          0.4541       0.6529
∆Ln_PENR t        0.1669       0.0487          3.4258       0.0017
∆Ln_SENR t        0.1057       0.0443          2.3820       0.0235
∆Ln_TENR t        0.0334       0.0363          0.9193       0.3650
ECT t-1           -0.2761      0.0638          -4.3227      0.0001

Table 4.6b: Long-Run Model
Dependent Variable: <em>Ln_PER_RGDP t</em>

Regressors        Coefficient  Standard Error  t-statistic  Prob.
Ln_PER_PEE t      0.0237       0.0489          0.4850       0.6310
Ln_PENR t         0.6045       0.1253          4.8213       0.0000
Ln_SENR t         0.3829       0.1106          3.4602       0.0016
Ln_TENR t         0.1210       0.1500          0.8064       0.4261
C                 10.1633      0.5757          17.6524      0.0000

In the error correction model, the error correction term (<em>ECT<sub>t-1</sub></em>) is, as expected, negative and statistically significant at the 5% level (based on its <em>p</em>-value of 0.0001). Its magnitude (-0.2761) indicates a low but significant rate of adjustment to long-run equilibrium and specifically implies that approximately 27.61% of any discrepancy from long-run equilibrium is corrected in each period. In the long-run model, the first long-run coefficient (on <em>Ln_PER_PEE</em>) is positive as expected, but its <em>p</em>-value (0.6310) indicates that it is statistically insignificant at the 5% level, implying that an increase in <em>Ln_PER_PEE</em> will not cause <em>Ln_PER_RGDP</em> to increase. Similarly, the fourth long-run coefficient (on <em>Ln_TENR</em>) is positive as expected, but its <em>p</em>-value (0.4261) indicates that it is statistically insignificant at the 5% level, implying that an increase in <em>Ln_TENR</em> will not cause <em>Ln_PER_RGDP</em> to increase. On the other hand, the second long-run coefficient (on <em>Ln_PENR</em>) is positive as expected and its <em>p</em>-value (0.0000) indicates that it is statistically significant at the 5% level, implying that a one-unit increase in <em>Ln_PENR</em> will cause <em>Ln_PER_RGDP</em> to increase by 0.6045.
Similarly, the third long-run coefficient (on <em>Ln_SENR</em>) is positive as expected and its <em>p</em>-value (0.0016) indicates that it is statistically significant at the 5% level, implying that a one-unit increase in <em>Ln_SENR</em> will cause <em>Ln_PER_RGDP</em> to increase by 0.3829. The intercept is also positive and statistically significant, indicating that the long-run model has a positive autonomous component of 10.1633 units.

<strong>4.4 Model Evaluation Results</strong>

<strong>4.4.1 Test of Goodness-of-Fit</strong>

Table 4.7: Test of Goodness-of-Fit Summary

Model       R<sup>2</sup>   Adj. R<sup>2</sup>
ARDL Model  0.9875          0.9854
ECM         0.6948          0.6567

The adjusted R<sup>2</sup> of the ARDL model is 0.9854, implying that the ARDL model explains as much as 98.54% of the variation in its endogenous variable. Further, the adjusted R<sup>2</sup> of the ECM is 0.6567, implying that the error correction model (ECM) explains as much as 65.67% of the variation in its endogenous variable.

<strong>4.4.2 T-Test and F-Test</strong>

Table 4.8: F-Test Summary

Model       F-Statistic  5% Critical Value  Prob.   Remarks
ARDL Model  490.1238     F(5,31) = 2.52     0.0000  Jointly Significant @ 5%
ECM         15.2700      F(4,32) = 2.67     0.0000  Jointly Significant @ 5%

The F-statistic (490.1238) for the ARDL model exceeds its 5% critical value (2.52), implying that the parameters of the ARDL model are jointly significant at the 5% level of significance. Further, the F-statistic (15.2700) of the ECM also exceeds its 5% critical value (2.67), implying that the parameters of the error correction model (ECM) are jointly significant at the 5% level of significance.
Table 4.9: T-Test Summary

T-Test for the Long-Run Estimates
Regressors      t-statistic  5% Critical Value  Remarks
Ln_PER_PEE t    0.4850       1.9600             Insignificant
Ln_PENR t       4.8213       1.9600             Significant
Ln_SENR t       3.4602       1.9600             Significant
Ln_TENR t       0.8064       1.9600             Insignificant
C               17.6524      1.9600             Significant

T-Test for the Error Correction Model (ECM) Estimates
Regressors      t-statistic  5% Critical Value  Remarks
∆Ln_PER_PEE t   0.4541       1.9600             Insignificant
∆Ln_PENR t      3.4258       1.9600             Significant
∆Ln_SENR t      2.3820       1.9600             Significant
∆Ln_TENR t      0.9193       1.9600             Insignificant
ECT t-1         -4.3227      1.9600             Significant

In the long-run model, the t-statistics for the first and fourth parameters are less than the 5% critical value (1.96) in absolute terms, indicating that these parameters are statistically insignificant at the 5% level, while the t-statistics for the second, third, and fifth parameters are greater than the 5% critical value in absolute terms, indicating that they are statistically significant at the 5% level. Similarly, in the ECM, the t-statistics for the first and fourth parameters are less than the 5% critical value in absolute terms, indicating statistical insignificance at the 5% level, while those for the second, third, and fifth parameters are greater than the 5% critical value in absolute terms, indicating statistical significance at the 5% level.

<strong>4.4.3 Normality Test</strong>

Table 4.10: Jarque-Bera Normality Test Summary

Model       Skewness  Kurtosis  JB Statistic  Prob.
ARDL Model  -0.5558   2.8731    1.9297        0.3810
ECM         -0.7369   2.9430    3.3544        0.1868

In the ARDL model, the <em>p</em>-value (0.3810) of the J-B test exceeds the 0.05 benchmark, indicating that the residuals of the ARDL model are normally distributed. Further, in the ECM, the <em>p</em>-value (0.1868) of the J-B test also exceeds the 0.05 benchmark, indicating that the residuals of the error correction model (ECM) are normally distributed.

<strong>4.4.4 Heteroskedasticity Test</strong>

Table 4.11: Breusch-Pagan-Godfrey Heteroskedasticity Test Summary

Model       BPG Statistic (Obs*R-sq)  Prob.
ARDL Model  4.3085                    0.5059
ECM         7.2979                    0.1209

In the ARDL model, the <em>p</em>-value (0.5059) of the BPG test exceeds the 0.05 benchmark, indicating that the residuals of the ARDL model are homoskedastic. Similarly, in the ECM, the <em>p</em>-value (0.1209) of the BPG test also exceeds the 0.05 benchmark, indicating that the residuals of the error correction model (ECM) are homoskedastic.

<strong>4.4.5 Autocorrelation Test</strong>

Table 4.12: Breusch-Godfrey Serial Correlation Test Summary

Model       BG Statistic (Obs*R-sq)  Prob.
ARDL Model  0.1021                   0.7493
ECM         0.8776                   0.3488

In the ARDL model, the <em>p</em>-value (0.7493) of the BG test exceeds the 0.05 benchmark, indicating that the residuals of the ARDL model are not serially correlated. Similarly, in the ECM, the <em>p</em>-value (0.3488) of the BG test also exceeds the 0.05 benchmark, indicating that the residuals of the error correction model (ECM) are not serially correlated.

<strong>4.4.6 Functional Specification Test</strong>

Table 4.13: RESET Model Specification Test Summary

Model       Test Statistic  Value     Degrees of Freedom  Prob.
ARDL Model  t-statistic     0.805722  30                  0.4267
            F-statistic     0.649189  (1, 30)             0.4267
ECM         t-statistic     0.533837  31                  0.5973
            F-statistic     0.284982  (1, 31)             0.5973

In the ARDL model, the F-statistic <em>p</em>-value (0.4267) of the RESET test exceeds the 0.05 benchmark, indicating that the ARDL model was adequately specified. Further, in the ECM, the F-statistic <em>p</em>-value (0.5973) of the RESET test exceeds the 0.05 benchmark, indicating that the error correction model (ECM) was adequately specified.

<strong>4.4.7 CUSUMSQ Stability Test</strong>

The cumulative sum of squares of recursive residuals (CUSUMSQ) test was used to examine the stability of the ARDL model. The result is captured in the following figure.

Figure 4.6: CUSUMSQ Plot

In interpreting the CUSUMSQ test, we may conclude that there is instability only if the CUSUMSQ plot falls outside the boundaries of the upper and lower dotted lines, which mark the 5% level of significance. In this regard, the CUSUMSQ plot in the above figure shows that the ARDL model becomes momentarily unstable in 2002. Apart from 2002, however, the ARDL model appears stable in every other year, as indicated by the confinement of the CUSUMSQ plot between the upper and lower dotted lines. Overall, considering that this momentary period of instability does not coincide with any major event in Nigeria&rsquo;s education sector, we conclude that the instability is due to chance and that the estimates of the model remain reliable.
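The CUSUMSQ statistic behind Figure 4.6 is the running share of the cumulative squared recursive residuals. A simplified sketch (using a plain residual sequence in place of true recursive residuals, and omitting the Brown, Durbin, and Evans significance bands):

```python
# Simplified CUSUMSQ: s_t = sum_{r<=t} w_r^2 / sum_{r<=T} w_r^2, which runs
# from near 0 up to 1. Stability is judged by whether s_t stays close to its
# expected value t/T (within significance bands not computed here).

def cusumsq(residuals):
    squares = [w * w for w in residuals]
    total = sum(squares)
    path, running = [], 0.0
    for s in squares:
        running += s
        path.append(running / total)
    return path

path = cusumsq([1.0, -1.0, 1.0, -1.0])
# path == [0.25, 0.5, 0.75, 1.0]: equal-sized residuals accumulate evenly,
# so the path tracks the expected line t/T exactly.
```

A residual sequence with one unusually large value would pull the path away from the t/T line around that observation, which is the kind of momentary excursion visible in 2002 in Figure 4.6.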

Conference papers on the topic "Multiple Single Input Change Vector (MSIC)"

1

Vasanthanayaki, C., A. Azhagu Jaisudhan Pazhani, and Jincy Johnson. "VLSI Implementation of Low Power Multiple Single Input Change (MSIC) Test Pattern Generation for BIST Scheme." In 2014 Fifth International Symposium on Electronic System Design (ISED). IEEE, 2014. http://dx.doi.org/10.1109/ised.2014.45.

2

Kumar, V. Selva, and J. Mohan. "Multiple single input change test vector for BIST schemes." In 2014 International Conference on Green Computing Communication and Electrical Engineering (ICGCCEE). IEEE, 2014. http://dx.doi.org/10.1109/icgccee.2014.6922320.
