I. Introduction
1. Economics, Econometrics, and Statistics;
2. Business Forecasts: applying economic theory and econometric models to solve business problems, such as forecasting future demand for products or services, future sales, or future profits;
3. Such forecasts are based on theory, past and available data, and appropriate economic/econometric models.

II. Econometrics and Econometric Models

III. Mathematics for Economics and Business
1. Vectors and matrices;
2. Algebra;
3. Calculus and derivatives.

IV. Data Descriptions and Collections
1. Data Structures
Data Set and Variables
A data set consists of basic measurements of individual items or things, called elementary units;
A variable is a piece of information recorded for every item.
Example 1: A data set includes 50 states' population, economic growth, and inflation. The elementary unit is State; the variables are Population, Economic Growth Rate, and Inflation Rate.
Example 2: A data set includes Internet services companies' number of users, revenues, and user growth rates. The elementary unit is Company (Internet Services Company); the variables are Number of Users, Revenues, and User Growth Rate.
Example 3: A data set includes employees' gender, salary, education, and years of working experience. The elementary unit is Employee; the variables are Gender, Salary, Education, and Years of Working Experience.
2. Data Classification
2.1. Classification according to the number of variables:
a. Univariate data (one variable);
b. Bivariate data (two variables): need to consider the relationship between the two variables;
c. Multivariate data (three or more variables).
2.2. Classification in terms of measurement (numbers or categories):
a. Discrete quantitative data: values from a list of specific numbers;
b. Continuous quantitative data, such as price or growth rate;
c. Qualitative data (non-numerical):
Ordinal qualitative data (meaningful ordering, such as ratings A, B, C, D);
Nominal qualitative data (no meaningful ordering).
2.3. Classification in terms of time series or cross-sectional:
a. Time series data;
b. Cross-sectional data;
c. Panel data: both time series and cross-sectional.
Data Sampling
Random sampling;
Large data;
Repeated and non-repeated sampling;
Data distributions and sampling.
3. Problems associated with data sets
3.1. Missing observations;
3.2. Multicollinearity;
3.3. Grouped data;
3.4. Measurement errors.

V. Data Analyses
Histogram;
Data summaries;
Variability.

VI. Probability and Statistics
1. Probability
The degree of risk is mainly related to the probability that the risk occurs, as well as to the size of the loss. The expected loss or gain also involves the probability directly.
Probability is the likelihood, or chance, of each possible future event;
Statistics is the practice of collecting data (information) and then interpreting and understanding the data.

Probability
1. The random experiment: a well-defined procedure that produces an observable outcome that could not be perfectly predicted in advance.
Example 1. Poll; Example 2. Consumer Satisfaction Survey (Best Automobiles in 1997); Example 3. Flip a coin; Example 4. Toss a die; Example 5. MBA Exit Survey; Example 6. Teaching Evaluation.
1.1. The sample space: each random experiment has a sample space that lists all possible outcomes of the random experiment, prepared in advance without knowledge of what will happen when the experiment is run.
Examples: the sample spaces for the random experiments above.
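To make the two definitions concrete, here is a minimal Python sketch (the die toss is just one of the examples above; the variable names are illustrative):

import random

# Sample space: listed in advance, before the experiment is run
sample_space = [1, 2, 3, 4, 5, 6]  # tossing one die

# The random experiment: produces one observable outcome from the
# sample space, not perfectly predictable in advance
outcome = random.choice(sample_space)
print("Observed outcome:", outcome)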
1.2. Events: an event is a collection of outcomes specified in advance, before the random experiment is run (the part of the sample space of interest).
Example 1. A = {Head when a coin is flipped}; Example 2. B = {Even numbers when a die is tossed}.
2. Probability. Every event has a number between 0 and 1, called its probability, that indicates how likely the event is to happen each time the random experiment is run.
2.1. Odds.
Odds = Probability that the event happens / Probability that the event does not happen.
Examples: odds for Event A and Event B defined above.
2.2. Relative frequency = Number of times the event occurs / Number of times the random experiment is run.
Examples: flipping the coin or tossing the die.
2.3. The law of large numbers and probability. The relative frequency will be close to the probability if the experiment is run many times.
2.4. Theoretical probability and the equally likely rule. If all outcomes are equally likely, then the probability of an event is
Number of outcomes in the event / Total number of possible outcomes.
2.5. Subjective probability: a person's opinion of what the probability is for an event.
Examples: insurance, international investment, and new product development.
2.6. Bayesian analysis: prior information (prior probabilities) is available.
3. Events and combinations
3.1. Notation:
S = sample space; Ø = empty event.
Not A = A' (complement). Example 1: if A = {even numbers}, then A' = {odd numbers}.
Not B = B' (complement). Example 2: if B = {odd numbers}, then B' = {even numbers}.
A and B = A∩B (intersection); here A∩B = {empty} = Ø.
A or B = A∪B (union); here A∪B = {all numbers} = {1, 2, 3, 4, 5, 6}.
Graphs.
3.2. Probability rules.
Probability(A') = 1 - Probability(A); and Probability(A) = ?
Probability(A∪B) = Probability(A) + Probability(B) - Probability(A∩B)
Probability(A∩B) = ?
Probability(S) = ?
Probability(Ø) = ?
3.3. Mutually exclusive events: events A and B are called mutually exclusive if
Probability(A∪B) = Probability(A) + Probability(B); or, equivalently,
Probability(A∩B) = 0.
3.4. Conditional probability of A given B:
Probability(A│B) = Probability(A∩B) / Probability(B);
Conditional probability of B given A:
Probability(B│A) = Probability(A∩B) / Probability(A).
As a result, we also have the following:
Probability(A∩B) = Probability(B) * Probability(A│B) = Probability(A) * Probability(B│A).
3.5. Independent events: events A and B are called independent if
Probability(A) = Probability(A│B).
Are these true if A and B are independent?
(1) Probability(A∩B) = Probability(A) * Probability(B);
(2) Probability(B) = Probability(B│A)?
What is the relationship between independent and mutually exclusive events? They are very different concepts, unless one event has zero probability.
Case 1. An event with a small probability.
Suppose that the probability of event A occurring is very small, and that the experiment is repeated n times independently. What is the probability of event A occurring at least once in the n trials?
Prob.(at least one A in n trials) = 1 - Prob.(no A in any of the n trials)
= 1 - {Prob.(A')}^n = 1 - {1 - Prob.(A)}^n ≈ 1 when n is very large.
Implication: a small-probability event will eventually happen if the experiment is repeated many times!
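A minimal sketch checking the Case 1 formula against the simulated relative frequency, consistent with the law of large numbers in 2.3 (p = 0.01 and n = 500 are purely illustrative):

import random

p = 0.01       # assumed small probability of event A (illustrative)
n = 500        # number of independent trials
runs = 10_000  # number of times the whole n-trial experiment is repeated

# Exact probability from Case 1: 1 - (1 - p)^n
exact = 1 - (1 - p) ** n

# Relative frequency: fraction of runs in which A occurred at least once
hits = sum(any(random.random() < p for _ in range(n)) for _ in range(runs))

print("exact:", round(exact, 4))           # about 0.9934
print("relative frequency:", hits / runs)  # close to exact when runs is large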
Case 2. Draw Cards.
Suppose that you draw 4 cards.
(1) What is the probability that all four cards are hearts?
The answer is (13/52)(12/51)(11/50)(10/49).
(2) What is the probability that all four cards are jacks?
The answer is (4/52)(3/51)(2/50)(1/49).
(3) Now suppose that you first draw a card that is not a heart (and not a jack), and then draw three more cards. What is the probability that those three cards are all jacks?
The answer is (4/51)(3/50)(2/49).
Case 3. Product Purchase Survey.
100 people are surveyed regarding the purchase of a new product. 50 of them are women and the other 50 are men. 30 of the 50 women purchased the product, and 20 of the 50 men purchased the product.
(1) Given that the product is purchased by one of these 100 people, what is the probability that it was purchased by a man?
(2) What is the probability that it was purchased by a woman?

              Purchased (P)   Not Purchased (NP)   Total
Men (M)            20                 30             50
Women (W)          30                 20             50
Total              50                 50            100

Prob.(P) = Prob.(M) Prob.(P│M) + Prob.(W) Prob.(P│W) = .5 * .4 + .5 * .6 = .5
Prob.(M│P) = Prob.(M and P) / Prob.(P) = Prob.(M) Prob.(P│M) / Prob.(P) = .5 * .4 / .5 = .4
Case 4. Probability Tree of Drug Testing.
T = the test is positive; NT = the test is not positive (negative);
D = an employee uses drugs; ND = the employee does not use drugs.
You are given: (1) Prob.(T│D) = .90; (2) Prob.(NT│ND) = .95; (3) Prob.(D) = .08.
Then you know immediately:
(a) Prob.(ND) = .92; (b) Prob.(NT│D) = .10; (c) Prob.(T│ND) = .05.
Furthermore, you have:
(d) Prob.(D and T) = Prob.(D) Prob.(T│D) = .08 * .90 = .072;
(e) Prob.(D and NT) = Prob.(D) Prob.(NT│D) = .08 * .10 = .008;
(f) Prob.(ND and T) = Prob.(ND) Prob.(T│ND) = .92 * .05 = .046;
(g) Prob.(ND and NT) = Prob.(ND) Prob.(NT│ND) = .92 * .95 = .874;
(h) Prob.(T) = Prob.(D) Prob.(T│D) + Prob.(ND) Prob.(T│ND) = .072 + .046 = .118;
(i) Prob.(D│T) = Prob.(D and T) / Prob.(T) = .072 / .118 = .610.
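A minimal sketch of the Case 4 calculation in Python (variable names are illustrative; the numbers are the ones given above), applying Bayes' rule to recover Prob.(D│T):

# Given quantities from Case 4
p_d = 0.08            # Prob.(D): employee uses drugs
p_t_given_d = 0.90    # Prob.(T | D): positive test given drug use
p_nt_given_nd = 0.95  # Prob.(NT | ND): negative test given no drug use

# Complements: (a) and (c)
p_nd = 1 - p_d                     # .92
p_t_given_nd = 1 - p_nt_given_nd   # .05

# Total probability of a positive test: (h)
p_t = p_d * p_t_given_d + p_nd * p_t_given_nd   # .072 + .046 = .118

# Bayes' rule: (i)
p_d_given_t = (p_d * p_t_given_d) / p_t
print(round(p_d_given_t, 3))  # 0.61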
2. Activities of statistics
Designing a plan for data collection (sample survey design);
Exploring the data: describing, summarizing, etc.;
Estimating an unknown quantity;
Testing hypotheses of interest.
3. Special statistical distributions
Binomial distribution;
Normal distribution;
Poisson distribution;
Exponential distribution.

VII. Time Series
1. Examples of time series data:
1.1. The daily return of the stock market.
Random walk theory: Y(t) = a + Y(t-1) + ε.
The stock market is efficient, so the market follows a random walk and the daily change is unpredictable.
1.2. Retail sales: growth and seasonal variation.
2. Basic methods to analyze time series data:
2.1. Trend-seasonal analysis: a direct and intuitive way to estimate the basic components of a monthly or quarterly time series.
Components included: (1) the long-term trend; (2) the exactly repeating seasonal patterns; (3) the medium-term, wandering, cyclic ups and downs; and (4) the random, irregular "noise".
2.2. Box-Jenkins ARIMA (AutoRegressive Integrated Moving Average) models.
3. Trend-seasonal analysis
3.1. Components: Data = Trend * Seasonal * Cyclic * Irregular
3.1.1. The long-term trend: the very long-term behavior of the time series, typically modeled as a straight line or an exponential curve;
3.1.2. The exactly repeating seasonal components: indicate the effects of the time of year, such as heating demand, sales in December, and agricultural goods' sales at harvest time;
3.1.3. The medium-term cyclic components: the gradual ups and downs that do not repeat each year and so are excluded from the seasonal components; they are also not random enough to be included in the irregular components;
3.1.4. The short-term, random irregular components.
3.2. The ratio-to-moving-average method:
(1) A moving average is used to eliminate the seasonal effects by averaging over an entire year; this also reduces the irregular components and produces a combination of the trend and cyclic components.
(2) Dividing the series by the smoothed moving-average series gives the ratio to moving average, which includes both the seasonal and irregular components. Grouping by time of year and then averaging within groups, you find the seasonal index for each time of year. Dividing each series value by the appropriate seasonal index for its time of year, you find the seasonally adjusted values.
(3) A regression of the seasonally adjusted series (Y) on time (X) is used to estimate the long-term trend as a straight line over time. This trend has no seasonal variation and leads to a seasonally adjusted forecast.
(4) Forecasting is then done by seasonalizing the trend. Taking predicted values from the regression equation (the trend) for future time periods and then multiplying by the appropriate seasonal index, you get forecasts that reflect both the long-term trend and the seasonal behavior.
3.3. Example: quarterly revenue (a code sketch of the full calculation follows the numbered steps below).

Year  Q  Revenue  MA of R  Data/MA  S-Index  S-adjusted  Time  Adj.-F   Forecast
2010  1   91707                      1.2952     70805       1    72083     93363
2010  2   63048                      0.8727     72245       2    73516     64158
2010  3   57041    73252   0.77870   0.7404     77043       3    74950     55491
2010  4   78667    75375   1.04368   1.0914     72077       4    76383     83366
2011  1   96794    76831   1.25983   1.2952     74732       5    77816    100788
2011  2   74947    78107   0.95954   0.8727     85879       6    79249     69161
2011  3   56791    81847   0.69387   0.7404     76705       7    80682     59736
2011  4   89127    83910   1.06217   1.0914     81661       8    82115     89623
2012  1  116250    83895   1.38566   1.2952     89754       9    83549    108213
2012  2   71998    85481   0.84227   0.8727     82500      10    84982     74163
2012  3   59620    85542   0.69697   0.7404     80526      11    86415     63980
2012  4   98985    84346   1.17357   1.0914     90693      12    87848     95880
2013  1  106878    85103   1.25586   1.2952     82518      13    89281    115638
2013  2   71800    85294   0.84179   0.8727     82273      14    90714     79166
2013  3   65880    86708   0.75979   0.7404     88981      15    92147     68224
2013  4   94254    91247   1.03295   1.0914     86358      16    93581    102137
2014  1  122915    95577   1.28603   1.2952     94900      17    95014    123063
2014  2   92079   100350   0.91758   0.8727    105511      18    96447     84169
2014  3   80241   106798   0.75133   0.7404    108378      19    97880     72469
2014  4  118075   110880   1.06489   1.0914    108184      20    99313    108393
2015  1  150682   112148   1.34360   1.2952    116338      21   100746    130487
2015  2   96967   113834   0.85183   0.8727    111112      22   102180     89172
2015  3   85492   112248   0.76163   0.7404    115470      23   103613     76713
2015  4  126312   107837   1.17132   1.0914    115731      24   105046    114650
2016  1  129762   104625   1.24026   1.2952    100186      25   106479    137912
2016  2   82597   100338   0.82319   0.8727     94645      26   107912     94175
2016  3   74167                      0.7404    100174      27   109345     80957
2016  4  103340                      1.0914     94683      28   110779    120907
2017  1                              1.2952                29   112212    145337
2017  2                              0.8727                30   113645     99178
2017  3                              0.7404                31   115078     85202
2017  4                              1.0914                32   116511    127164

Long-term trend = a + b * Time = 70650.07 + 1433.16 * (Time period)
(1) Moving Average = Trend * Cyclic; MA(t) = [y(t-2)/2 + y(t-1) + y(t) + y(t+1) + y(t+2)/2] / 4;
(2) (Seasonal)(Irregular) = Data / Moving Average;
(3) Seasonal Index = average of (Data / Moving Average) for that season;
(4) Seasonally Adjusted Value = Data / Seasonal Index = Trend * Cyclic * Irregular;
(5) Long-Term Trend = a + b * (Time Period), estimated by regression using the seasonally adjusted values as the dependent variable (Adj.-F above is the fitted trend);
(6) Forecast = Trend * Seasonal Index, where Trend is the predicted value from the regression.
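A minimal sketch of steps (1) through (6) in plain Python, assuming no external libraries; `revenue` holds the quarterly values from the table starting at 2010 Q1 (only the first eight are listed here), and the variable names are illustrative:

# Quarterly revenue from the example table, 2010 Q1 onward
revenue = [91707, 63048, 57041, 78667, 96794, 74947, 56791, 89127]
# ... extend with the remaining quarters from the table

n = len(revenue)

# (1) Centered moving average: MA(t) = [y(t-2)/2 + y(t-1) + y(t) + y(t+1) + y(t+2)/2] / 4
ma = {t: (revenue[t-2]/2 + revenue[t-1] + revenue[t] + revenue[t+1] + revenue[t+2]/2) / 4
      for t in range(2, n - 2)}

# (2) Ratio to moving average = seasonal * irregular
ratio = {t: revenue[t] / ma[t] for t in ma}

# (3) Seasonal index: average the ratios within each quarter (q = 0..3, index 0 = Q1)
s_index = {q: sum(r for t, r in ratio.items() if t % 4 == q) /
              max(1, sum(1 for t in ratio if t % 4 == q))
           for q in range(4)}

# (4) Seasonally adjusted series
adjusted = [revenue[t] / s_index[t % 4] for t in range(n)]

# (5) Least-squares trend: adjusted = a + b * time, with time = 1, 2, 3, ...
times = list(range(1, n + 1))
tbar = sum(times) / n
ybar = sum(adjusted) / n
b = (sum((t - tbar) * (y - ybar) for t, y in zip(times, adjusted))
     / sum((t - tbar) ** 2 for t in times))
a = ybar - b * tbar

# (6) Forecast = trend * seasonal index, e.g. for the next quarter
t_next = n + 1
forecast = (a + b * t_next) * s_index[(t_next - 1) % 4]
print(round(forecast))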
4. The Box-Jenkins ARIMA processes
A family of linear statistical models, based on the normal distribution, flexible enough to imitate the behavior of many different real time series by combining autoregressive (AR) processes, integrated (I) processes, and moving-average (MA) processes.
4.1. The random noise process: no memory of the past behavior of the series.
Data = Mean value + Random noise; or y(t) = μ + ε(t); the long-term mean value is μ.
4.2. The autoregressive (AR) process: remembers where it was.
Data = δ + ψ * (Previous value) + Random noise;
or y(t) = δ + ψ y(t-1) + ε(t); the long-term mean value is δ / (1 - ψ).
4.3. The moving-average (MA) process: does not remember exactly where it was, but remembers the random noise component of where it was.
Data = μ + Random noise - θ * (Previous random noise);
y(t) = μ + ε(t) - θ ε(t-1); the long-term mean value is μ.
4.4. The ARMA process: short-term memory.
Data = δ + ψ * (Previous value) + Random noise - θ * (Previous random noise);
y(t) = δ + ψ y(t-1) + ε(t) - θ ε(t-1); the long-term mean value is δ / (1 - ψ).
4.5. The pure integrated process: remembers where it was and then moves at random.
The pure integrated (random walk) process:
Data = δ + Previous value + Random noise;
y(t) = δ + y(t-1) + ε(t); in differenced form: y(t) - y(t-1) = δ + ε(t).
4.6. The ARIMA process:
Data change = δ + ψ * (Previous change) + Random noise - θ * (Previous random noise);
y(t) - y(t-1) = δ + ψ (y(t-1) - y(t-2)) + ε(t) - θ ε(t-1); the long-term mean value of the change in y is δ / (1 - ψ).
5. The choice of the orders of ARIMA:
5.1. based on the actual problem;
5.2. testing for the appropriate orders.
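A minimal sketch simulating the AR(1) process of 4.2 and checking its long-term mean δ / (1 - ψ); the parameter values δ = 2 and ψ = 0.6 are purely illustrative:

import random

delta, psi = 2.0, 0.6   # illustrative AR(1) parameters
n = 100_000             # long simulation to expose the long-term mean

y = delta / (1 - psi)   # start at the theoretical mean, 5.0
values = []
for _ in range(n):
    eps = random.gauss(0, 1)     # random noise: normal, mean 0
    y = delta + psi * y + eps    # y(t) = delta + psi * y(t-1) + eps(t)
    values.append(y)

print("theoretical mean:", delta / (1 - psi))  # 5.0
print("simulated mean:", sum(values) / n)      # close to 5.0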
VIII. Multiple Regression
Correlation and Regression
1. Exploring relationships using scatterplots and correlation.
1.1. The scatterplot: displays each case (elementary unit) using two axes to represent the two factors; X (the horizontal axis) is the explanatory factor and Y (the vertical axis) is the result or response.
Examples: study time and GPA; education and earnings; price and demand.
1.2. Correlation: the correlation coefficient, denoted by r, is a pure number between -1 and 1 summarizing the strength of the relationship in the data.

Correlation   Interpretation
 1            Perfect positive relationship. All data points fall exactly on a line that tilts upward to the right.
 0            No relationship: just a random cloud, tilting neither up nor down toward the right.
-1            Perfect negative relationship. All data points fall exactly on a line that tilts downward to the right.

1.3. The formula for the correlation coefficient:
r = [average of the cross deviations (X - X̄)(Y - Ȳ)] / (Sx Sy),
where Sx and Sy are the standard deviations of X and Y (computed here by dividing by n, matching the table below).
Example (Problem 28, Page 428): the relationship between earnings and stock prices (see the table below).
1.4. Covariance: the numerator of the correlation; its magnitude is difficult to interpret on its own.
1.5. Outlier: a data point in a scatterplot is a bivariate outlier if it does not fit the relationship of the rest of the data.
1.6. Correlation is not causation: there may be a third factor that causes the relationship!

Company          Earnings (E)  Stock price (P)  Dev. sq. of E  Dev. sq. of P  Cross dev.
Genentech             0.24         17.88          0.362404      36.48764025    3.636381
Alza                  0.50         24.75          0.743044     166.68101025   11.128851
Amgen                 0.09         37.00          0.204304     633.05076025   11.372546
Cetus                -0.22         11.38          0.020164       0.21114025   -0.065249
Genetics Insti.      -0.81         18.75          0.200704      47.75501025   -3.095904
Biogen               -0.21          9.38          0.023104       6.04914025   -0.373844
Centocor              0.21         17.00          0.327184      26.63076025    2.951806
Chiron               -0.97         15.00          0.369664       9.98876025   -1.921584
Xoma                 -1.00         15.00          0.407044       9.98876025   -2.016399
Nova Pharm.          -0.32          5.38          0.001764      41.72514025   -0.271299
Immunex               0.02         11.75          0.145924       0.00801025   -0.034189
Collagen              0.12         11.38          0.232324       0.21114025   -0.221479
CA Biotech           -0.87          5.25          0.258064      43.42151025    3.347466
Calgene              -0.66          6.38          0.088804      29.80614025    1.626931
DNA Plant Tech       -0.16          4.63          0.040804      51.97689025   -1.456319
Repligen             -0.57          7.25          0.043264      21.06351025    0.954616
Imreg                -0.36          4.50          0.000004      53.86826025   -0.014679
Celgene              -0.90          8.75          0.289444       9.54501025    1.662151
Cytogen              -1.10          3.63          0.544644      67.39589025    6.058611
Damon Biotech        -0.27          1.75          0.008464     101.79801025   -0.928234
Mean                 -0.362        11.8395        0.22690105    71.45592079    1.70211474
Standard Dev.         0.476341      8.4531604
Correlation           0.422719

2. Regression: predicting one thing from another.
2.1. The variable being predicted is denoted by Y (the dependent variable), and the variable that helps with the prediction is X (the independent variable, explanatory variable, or regressor).

X                          Y
Roles:
Predictor                  Predicted
Independent                Dependent
Explanatory                Explained
Stimulus                   Response
Exogenous (from outside)   Endogenous (from inside)
Examples:
Sales                      Earnings
Number produced            Cost
Effort                     Results
Investment                 Outcome
Experience                 Salary
Temperature of process     Output yield

2.2. Linear regression: Y = Intercept + (Slope)(X) = a + bX.
2.3. Least-squares slope and intercept: Slope = b = r (Sy / Sx); Intercept = a = Ȳ - b X̄.
Example from the data above:
Sy = 8.4531; Sx = .4763; r = .4227; X̄ = -.362; Ȳ = 11.8395.
So b = r (Sy / Sx) = .4227 * (8.4531 / .4763) = 7.501, and a = Ȳ - b X̄ = 11.8395 - 7.501 * (-.362) = 14.554.
As a result, we have Y = 14.554 + 7.501 X.
2.4. Prediction. Suppose that one company has earnings of $2 per share; then its predicted stock price is
Predicted value of Y = 14.554 + 7.501 * 2 = 29.556.
(Draw the regression line.)
2.5. Residual:
Residual = Actual Y - Predicted Y = Y - (a + bX).
3. The standard error of estimate and R²
3.1. The standard error of estimate tells how large the prediction errors (residuals) are for the data set:
Se = Sy * sqrt{(1 - r²)(n - 1) / (n - 2)}.
Example: Se = 8.4531 * sqrt{(1 - .4227²)(19 / 18)} = 8.4531 * .9311 = 7.8706.
3.2. R²: a relative measure of how much has been explained; the square of the correlation, R² = r².
Example: R² = r² = .4227² = .1787.
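A minimal sketch reproducing the correlation, slope, intercept, Se, and R² above from the raw (E, P) pairs, using population-style standard deviations (dividing by n) to match the table's figures:

import math

# (earnings, stock price) pairs from the table
data = [(0.24, 17.88), (0.50, 24.75), (0.09, 37.00), (-0.22, 11.38),
        (-0.81, 18.75), (-0.21, 9.38), (0.21, 17.00), (-0.97, 15.00),
        (-1.00, 15.00), (-0.32, 5.38), (0.02, 11.75), (0.12, 11.38),
        (-0.87, 5.25), (-0.66, 6.38), (-0.16, 4.63), (-0.57, 7.25),
        (-0.36, 4.50), (-0.90, 8.75), (-1.10, 3.63), (-0.27, 1.75)]

n = len(data)
xbar = sum(x for x, _ in data) / n                       # -0.362
ybar = sum(y for _, y in data) / n                       # 11.8395
sx = math.sqrt(sum((x - xbar)**2 for x, _ in data) / n)  # 0.4763
sy = math.sqrt(sum((y - ybar)**2 for _, y in data) / n)  # 8.4532
r = sum((x - xbar) * (y - ybar) for x, y in data) / n / (sx * sy)  # 0.4227

b = r * sy / sx       # least-squares slope, about 7.50
a = ybar - b * xbar   # intercept, about 14.55
se = sy * math.sqrt((1 - r**2) * (n - 1) / (n - 2))  # about 7.87
r2 = r**2             # about 0.179
print(a, b, se, r2)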
4. Confidence intervals and hypothesis tests for regression
4.1. The linear model assumption defines the population.
Linear model for the population:
Y = α + βX + ε = (population relationship) + randomness,
where ε has a normal distribution with mean 0 and standard deviation σ.
α and β are the population intercept and slope; a and b are estimators of α and β, and the standard error of estimate Se is the estimator of σ.
4.2. Standard error of the regression coefficient:
Sb = Se / [Sx * sqrt(n - 1)]; degrees of freedom = n - 2.
Example: Sb = 7.8706 / (.4763 * 4.3589) = 7.8706 / 2.0761 = 3.7911 (significant?).
4.3. Standard error of the intercept term:
Sa = Se * sqrt{1/n + X̄² / [(n - 1) Sx²]}; degrees of freedom = n - 2.
Example: Sa = 7.8706 * .2836 = 2.2317 (significant?).
4.4. Confidence intervals:
from b - t Sb to b + t Sb for the population slope β;
from a - t Sa to a + t Sa for the population intercept α.
4.5. t-statistics:
t (statistic for α) = a / Sa;
t (statistic for β) = b / Sb.
4.6. Testing the hypotheses
4.6.1. Testing β's significance: H0: β = 0; H1: β ≠ 0.
4.6.2. Testing α's significance: H0: α = 0; H1: α ≠ 0.
4.6.3. Methods to test the hypotheses: use the confidence interval or the t-statistic.
4.6.4. One-sided tests.
4.6.5. Other hypotheses and tests:
H0: β = β0; H1: β ≠ β0; t = (b - β0) / Sb.
5. A new observation: uncertainty and the confidence interval
There are two sources of uncertainty in predicting the value of Y for a given X: the randomness of ε and the estimation error in a and b.
5.1. Standard error of a new observation of Y given X0:
S(Y│X0) = Se * sqrt{1 + 1/n + (X0 - X̄)² / [(n - 1) Sx²]}; degrees of freedom = n - 2.
5.2. Confidence interval for Y given X0:
from (a + bX0) - t S(Y│X0) to (a + bX0) + t S(Y│X0) (a worked sketch is given at the end of this section).
5.3. The mean of Y: uncertainty and the confidence interval.
The mean of Y given X0 has the same predicted value a + bX0, but a smaller standard error and a narrower confidence interval.
6. Linear regression can be misleading:
6.1. Nonlinear relationships;
6.2. Outliers affecting the regression results;
6.3. Violations of the error-term assumption (ε normal with mean 0 and standard deviation σ);
6.4. Predicting intervention from observed experience;
6.5. The meaning of the intercept;
6.6. A hidden third factor.
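Following up on 5.1 and 5.2, a minimal sketch of a 95% interval for a new observation at X0 = 2, reusing the earnings/stock-price figures; the t value 2.101 is the standard two-sided 95% critical value for n - 2 = 18 degrees of freedom:

import math

# Figures from the earnings / stock-price example
n, xbar, sx = 20, -0.362, 0.4763
a, b, se = 14.554, 7.501, 7.8706
t = 2.101   # two-sided 95% t critical value, 18 degrees of freedom

x0 = 2.0
pred = a + b * x0   # 29.556, as in section 2.4

# Standard error of a new observation at x0 (formula 5.1)
s_new = se * math.sqrt(1 + 1/n + (x0 - xbar)**2 / ((n - 1) * sx**2))

# Confidence interval for the new Y (formula 5.2)
print(pred - t * s_new, pred + t * s_new)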
