
Thursday, December 30, 2010

Multiple equilibria and multiple steady states in macro models...

There is an important distinction in economics (that admittedly I have only recently begun to appreciate) between macroeconomic models that exhibit multiple equilibria and those that exhibit multiple steady states.  Part of the reason for my confusion (which was recently cleared up by The Economy As An Evolving Complex System (Santa Fe Institute Series)) was that I held the ridiculous notion that equilibrium and steady state meant the same thing.  Alas, at least within the economic sciences this fails to be true.  Economists like to use the term "equilibrium" in many different ways...some of which are very different from the way the term is used in the physical and biological sciences.  When economists use the term equilibrium in macroeconomics, they are typically talking about some type of expectations equilibrium (I think...someone please correct me if I am off base with this!).

First, an example of multiple steady states.  Multiple steady states can be used as an "explanation" of why some countries are poor and some countries are rich.  There is a variant of the Solow model of economic growth with threshold non-convexities in technology that exhibits this behavior.  This dynamical system has two steady states for the value of the capital stock, but the steady state that the economy converges to in the long run depends on the initial level of capital stock: if the initial capital stock is low (i.e., below the threshold) then the economy ends up poor, while if the initial capital stock is high (i.e., above the threshold) then the economy ends up rich.  Economists tend to be more comfortable with the idea of multiple steady states than with multiple equilibria...
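
To make the threshold mechanism concrete, here is a minimal Python sketch of such a model.  The functional forms and every parameter value below are my own illustrative choices, not anything from a particular paper:

```python
# A minimal sketch of a Solow-style model with a threshold non-convexity:
# productivity jumps from A_LO to A_HI once capital crosses K_BAR, so the
# long-run steady state depends on the initial capital stock.
ALPHA, S, DELTA = 0.3, 0.3, 0.1    # capital share, savings rate, depreciation
A_LO, A_HI, K_BAR = 1.0, 2.0, 8.0  # low/high productivity and the threshold

def next_k(k):
    """One period of accumulation: k' = s*A(k)*k**alpha + (1 - delta)*k."""
    A = A_LO if k < K_BAR else A_HI
    return S * A * k**ALPHA + (1 - DELTA) * k

def simulate(k0, T=500):
    k = k0
    for _ in range(T):
        k = next_k(k)
    return k

# Same dynamics, different initial conditions, different long-run outcomes:
print(simulate(2.0))   # below the threshold: the "poor" steady state (~4.8)
print(simulate(9.0))   # above the threshold: the "rich" steady state (~12.9)
```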

Now to give a concrete example of multiple equilibria, suppose that we augment the above model by giving it "microfoundations."  Basically, endogenize the savings rate by allowing a rational representative consumer, who seeks to maximize expected utility from consumption, to choose a savings policy.  This savings policy can be thought of as a function that the consumer uses to choose how much to save after observing the state of the world.  If the model exhibited multiple equilibria, then there would need to exist more than one optimal savings policy that our agent could choose.  Economists tend to be less comfortable with models that exhibit multiple equilibria because such models typically abstract from expectation-related coordination issues (why should it be that all agents coordinate their expectations so as to select one of the optimal savings policies over the other?  This is the equilibrium selection problem.).

Hopefully the above two paragraphs capture the distinction between multiple steady states and multiple equilibria...

One last related comment about multiple equilibria in economic models.  I can see why economists are uncomfortable with models that exhibit multiple equilibria when the model itself says nothing about how agents coordinate expectations to select one equilibrium over others.  Most researchers seem to address the equilibrium selection problem by developing newer and more sophisticated refinements of equilibrium (e.g., evolutionarily stable, trembling-hand perfect, stochastically stable, etc.).  I feel like this strategy is fundamentally flawed.  How are problems of equilibrium selection solved in the real world of heterogeneous, interacting agents?  One possible answer is that such problems are solved via socio-cultural norms and institutions...

Wednesday, December 29, 2010

Irving Fisher, and the Debt-Deflation Theory...

In preparation for a talk I am giving in January on Fragile Financial Networks I have been reading papers on various versions of the "financial accelerator."  I just finished reading Irving Fisher's classic 1933 Econometrica paper on the debt-deflation theory of depressions...I would highly recommend it!  Besides the famous debt-deflation stuff, I thought Fisher's comment that new investment opportunities created by technological (or financial) innovation are a major "starter" of over-indebtedness was particularly striking.  Here it seems like Fisher is saying that market economies (which tend to be very good at generating technological/financial innovations) sow the seeds of their later destruction...

I am working on the slides for this presentation now, and will be sure to post them as soon as they are finished...
 

Interesting new book on the way...

I just ordered Programming Collective Intelligence: Building Smart Web 2.0 Applications from Amazon...

As a side project to continue developing my Python skills I am going to learn how to write scripts to implement prediction markets (and/or other collective intelligence type algorithms).  I would be quite keen to see if such algorithms could be used to aggregate macro-economic forecasts.  I suspect that prediction markets can be used to make macro-economic forecasts, and I would be surprised if someone was not already doing it. 
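
As a first pass at what such a script might look like, here is a minimal sketch of one standard mechanism, Hanson's logarithmic market scoring rule (LMSR).  The outcomes, liquidity parameter, and trades below are all made up for illustration:

```python
# A minimal sketch of an LMSR prediction market.  The outcomes could be,
# say, bins for next quarter's GDP growth; everything here is illustrative.
import math

class LMSRMarket:
    def __init__(self, n_outcomes, b=100.0):
        self.q = [0.0] * n_outcomes  # shares outstanding for each outcome
        self.b = b                   # liquidity parameter

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def prices(self):
        """Current prices sum to one, so they read as probabilities."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return [math.exp(qi / self.b) / z for qi in self.q]

    def buy(self, outcome, shares):
        """Charge the trader the change in the cost function."""
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

market = LMSRMarket(n_outcomes=3)
market.buy(0, 50)        # a trader bets on outcome 0...
print(market.prices())   # ...and the implied probabilities shift toward it
```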

As Scott Page pointed out in his excellent book The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies (New Edition), diversity of the models used by participants to make forecasts is an important part of any successful prediction market.  I suspect that maintaining this diversity would be difficult in macro-economic prediction markets simply because there is a limited number of variables used to predict something like GDP or industrial production...

Monday, December 27, 2010

Ergodicity and the Dobrushin Coefficient...

I am working my way through Chapter 4 of Economic Dynamics: Theory and Computation and wish to pose the following question related to theorem 4.3.18 on page 90:

Theorem 4.3.18: Let p be a stochastic kernel on some metric space S with Markov operator M.  The following statements are equivalent:
  1. The dynamical system (P(S), M) is globally stable (note that P(S) is the set of probability distribution functions defined over S).
  2. There exists a natural number t such that the Dobrushin coefficient of the t-th iterate of p is greater than zero.
Stachurski suggests a more intuitive phrasing of the above theorem: suppose we run two Markov chains from two different starting points x and x'.  The dynamical system is globally stable if and only if there is a positive probability that the two chains will meet.  To me this sounds suspiciously similar to the definition of an ergodic dynamical system.  However, am I correct to make this connection?  Is a dynamical system that has a positive Dobrushin coefficient necessarily ergodic?
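
To fix ideas, here is a minimal sketch of the Dobrushin coefficient in the finite-state case; the example chains are mine, not Stachurski's:

```python
# Finite-state Dobrushin coefficient:
#   alpha(P) = min over row pairs (i, j) of sum_k min(P[i, k], P[j, k]).
# The theorem says the system is globally stable iff alpha of some iterate
# of P is strictly positive.
import numpy as np

def dobrushin(P):
    n = P.shape[0]
    return min(np.minimum(P[i], P[j]).sum()
               for i in range(n) for j in range(n))

# A periodic chain never "mixes": its iterates alternate between P and the
# identity, so the coefficient stays at zero...
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(dobrushin(P), dobrushin(P @ P))   # 0.0 0.0 -> not globally stable

# ...while a chain whose rows overlap has a positive coefficient.
Q = np.array([[0.5, 0.5],
              [0.9, 0.1]])
print(dobrushin(Q))                     # 0.6 > 0 -> globally stable
```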

Tuesday, December 21, 2010

Blogging Economics from Schiphol Airport...

On my way home for the holidays!  Right now I am on a layover at Schiphol Airport near Amsterdam.  I have about an hour or so until the plane leaves and then a seven-hour flight...so I am going to do what any normal person would do:

Kill time by working on a model of international capital flows...

Tuesday, December 14, 2010

Back to Python and Markov Chains...

So I am back to programming in Python and working my way through Economic Dynamics: Theory and Computation.  I am in the middle of Chapter 4 at the moment and have just written some basic code for simulating the Markov-switching model of unemployment from Hamilton (2005).  I highly recommend a read of the paper.  It is fairly short, contains a neat little model, and after reading it I felt like I had a greater understanding of the dynamics of unemployment and the business cycle...
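
For anyone curious, here is a rough sketch of the flavor of the simulation; the regime-transition probabilities and drifts are illustrative placeholders, not Hamilton's actual estimates:

```python
# A sketch of a Markov-switching model of unemployment: the economy switches
# between expansion and recession regimes, and unemployment drifts
# differently in each.  All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(42)

P = np.array([[0.95, 0.05],      # expansion -> (expansion, recession)
              [0.20, 0.80]])     # recession -> (expansion, recession)
drift = np.array([-0.05, 0.30])  # per-period change in unemployment by regime

def simulate(T=240, u0=5.0):
    state, u, path = 0, u0, []
    for _ in range(T):
        u = max(u + drift[state] + rng.normal(0, 0.1), 0.0)
        path.append(u)
        state = rng.choice(2, p=P[state])
    return np.array(path)

path = simulate()
print(path.min(), path.max())  # recessions show up as sharp spikes in u
```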

I will continue to work on the code over the holidays, and will push it out to GitHub for others to use...after I get my GitHub repository set up!

Best network visualization that I have seen in a while...

Follow the link to read the story behind this image...if you look closely you will notice that there are actually no geographical borders plotted in this image...the "borders" emerge as a result of the geographical clustering of network connections between Facebook users...spooky!
Facebook visualisation

Rational Addiction...



I have known drug users...and I would not describe them as having anything remotely approaching time-consistent preferences...

Monday, December 13, 2010

Theory of Value: Final Thoughts...

I have just finished reading the final chapter on uncertainty in Debreu's Theory of Value.  This final chapter, which is very short, simply introduces the idea of contingent commodities and then sketches how the theorems and proofs of the previous chapters go through in this more general case.

Instead of sharing my thoughts about contingent commodities, I thought I would post some of my over all thoughts about the book and about general equilibrium more generally...

After reading this book I feel like I have a much improved understanding of both the mathematics and the economics of general equilibrium theory.  For an aspiring academic economist, this is clearly a good thing.  Unfortunately, I do not feel like I have a better understanding of how real-world  economies generally behave.   This is clearly not a good thing. 

After reading, say, Minsky's Stabilizing an Unstable Economy, I felt that I was better equipped to talk about issues of serious importance in modern, capitalist economies.  I do not feel that way after reading Theory of Value.  General equilibrium, in my view at least, is not supposed to describe real-world economies but instead serves as a kind of null model for how an idealized economy should behave.  Thus perhaps comparing Theory of Value and Stabilizing an Unstable Economy is not very fair.

However, even as a null model of the economy, I think general equilibrium falls short.  Any null model of the economy, in my opinion, must allow for the possibility that individual optimization decisions are influenced by the decisions of other individuals in the economy.  In the real world, economic behavior is a very social activity, and preferences and individual decisions are heavily influenced by others' actions and beliefs.  In the real world, there are also lots of trades.  Perhaps even lots of "false trades" (by false trades I mean trades at non-equilibrium prices).  In GE no one trades until the equilibrium price vector has been calculated, and then they only trade once.

Once one allows for the possibility that a single agent's decisions can impact the decisions (and/or influence the preferences) of other agents, then micro-dynamics may no longer average out in the aggregate.  Once one allows for "false trades," each false trade alters the wealth distribution amongst the agents in the economy, which then shifts the equilibrium to which the economy would have converged.  In this world "equilibrium" is a moving target.

None of the above critiques of GE are original, and I had encountered all of them prior to reading Theory of Value.  Despite my criticisms, I am still very glad that I read the book and would recommend it to anyone who plans on pursuing an academic career in economics... 

Theory of Value, Chapter 6...

Just finished reading Chapter 6: Optimum.  Debreu lays out the conditions necessary for the equilibrium concept described in the previous chapter to be an optimum for the economy, and under what conditions any optimum can be supported as an equilibrium of a private ownership economy. 

Convexity of individual consumption sets, and of the overall economy's production possibilities set, plays an important role in the proofs.  The convexity requirement on the production possibilities set, though, is only required to prove that any optimum can be supported as a market equilibrium under a certain price vector.

My only other comment is to point out that, according to Debreu, optimums are in general not comparable to one another (except in the trivial case where everyone is simply indifferent between the two optimums).  It is not entirely clear to me, though, how this squares with the idea of Pareto dominance, which is sometimes used to eliminate some market equilibria.  To compare two optimums would require, I think, being able to make inter-personal comparisons of utility...

Sunday, December 12, 2010

The Golden Rule...

"The golden rule of research is to carefully define your question before you start searching for answers..."
-probably many people

Unfortunately...I very often seem to disregard this rule, and start searching for answers to questions that I have not defined...this is a very inefficient search method.

Networks, cycles, and supply chains...

If you haven't already read it, check out this post on Stumbling and Mumbling...it is very similar in spirit to some things that James, Sean, and I have been talking about recently regarding network theories of complementarities in the production process and how this might generate business cycles...

Theory of Value, Chapter 5...

Chapter 5 of Debreu's Theory of Value is on economic equilibrium.  First some definitions...let x_i be the vector of consumptions for consumer i, with i indexed from 1,...,m; y_j be the vector of production for producer j, with j indexed from 1,...,n; let w_i denote the resource endowment of consumer i (the sum of these endowments over 1,...,m equals w, the total resources of the economy); finally, s_ij is consumer i's share in the profits of firm j.  Net demand is x - y; excess demand is defined to be x - y - w.

The first few sections in the chapter define what an economy is, what a market equilibrium is, and what the attainable states of an economy are.  Basically, attainable states of the economy are states where consumers are choosing a consumption bundle that is possible for them (i.e., satisfies their wealth constraint), producers are choosing a production that is possible for them, and the market is in equilibrium (i.e., net demand must equal the total available resources).  An equilibrium of the economy is then defined to be some subset of these attainable states where consumers are maximizing utility and firms are maximizing profit.

Some questions about private ownership...
Private ownership of the means of production, I think, is simply Debreu's way of allocating the pure profit that producers make in equilibrium when production processes exhibit decreasing returns to scale.  How are the shares determined?   This does not seem to be addressed.  If all firms are identical, then it doesn't matter.  However if firms are not identical, then it seems to me that consumer i's wealth (and by extension his choice of consumption) would change depending on his idiosyncratic portfolio of shares in the producers.  There seems to be a missing market for stocks in this world... 

On an unrelated note: I have always felt that the shape of the production possibilities set was technologically determined, and that it was shares in this technology that were owned by the consumers (who I suppose may or may not also be the workers).  These shares then give a consumer claims to the proceeds from the sale of the output produced.  The point is, I have always thought of the consumers owning the production technology itself and not simply owning the output of production.  Anyone have thoughts on this?  Do consumers own the technology of production?  Or do they simply own the output?  If it is the latter, then who owns the technology of production?

Proof of Existence: I will not comment on the proof, except to say that Debreu proves only existence and does not show uniqueness or stability.

Also...in the end-of-chapter notes Debreu cites a paper by L.W. McKenzie called "Competitive Equilibrium with Dependent Consumer Preferences" which looks really interesting...unfortunately I cannot seem to find it in a cursory Google search...

Saturday, December 11, 2010

Quote of the Day...

"Death had to take Roosevelt sleeping, for if he had been awake, there would have been a fight."
-Thomas Marshall, U.S. Vice President,
Commenting on the death of Theodore Roosevelt 

Quote of the Day...

"I happen to have a talent for allocating capital. But my ability to use that talent is completely dependent on the society I was born into. If I'd been born into a tribe of hunters, this talent of mine would be pretty worthless...but I was lucky enough to be born in a time and place where society values my talent, and gave me a good education to develop that talent, and set up the laws and the financial system to let me do what I love doing - and make a lot of money doing it. The least I can do is help pay for all that."
-Warren Buffett
When I was younger, I think I way over-estimated the correlation between abilities and outcomes.  There really is a lot of randomness in the world, and this randomness holds some (perhaps considerable) sway over individual outcomes.  How does one deal with the possibility that your lot in life may depend on a certain degree of randomness and not necessarily on one's individual talent?  Is it the case that, despite the influence of randomness in our lives, the optimal behaviour is still to behave as if everything were completely deterministic and within our control?  Does working harder minimize the impact of randomness in some sense?  Or does working harder simply provide us with the illusion of control over our lot in life?  Somehow I think that society in general benefits from having citizens who genuinely believe that they are the masters of their fate...even if this isn't completely true.  The incentives are better...

Friday, December 10, 2010

Slide from my trade networks presentation...

So, today I gave my first presentation of work that I have been doing on the evolution of hierarchy and community structure in international trade networks.  I have included my slides below...you can also skip down to just below my slides and find a short description of the various plots (if you are interested)...feedback and comments are appreciated as always...
Network Fragility (Short)


Some additional thoughts on the CCC plots...
Below is the plot of the CCC for OECD countries from 1962-2009 using data from UN Comtrade.  Grey bars represent U.S. recessions as defined by NBER.   I will focus my discussion on this plot (the other is similar...one of the nice things about limiting analysis to OECD countries is that the results are not dependent on choice of data!).  Note that five countries have yet to report for 2009. 
The evolutionary theory that I am testing predicts that environmental change will cause an increase in the modularity of the trade network.  Here modularity (really hierarchy) is measured by the CCC.  He (2010) supposes that U.S. recessions are an indicator of environmental change.  I have a couple of issues with this.  First, he is using U.S. recessions, which may or may not be a good indicator of global recessions (to my knowledge there is not a universally accepted measure of global recessions).  More importantly, while the CCC does indeed increase during recessions, the largest increases in the CCC seem to occur outside of recessions...see, for example, the entire decade of the 1960s!  Generally speaking, the economic environment in which global trade is conducted is constantly changing and forcing individuals and companies to adapt along with it.  I think a better measure of environmental change is needed.

Not that this is the answer, but I think I will plot some measure of real oil price shocks against the CCC and see what that looks like...

Thursday, December 9, 2010

Theory of Value: Chapter 4...

Having just put the finishing touches on my lecture slides for my talk at tomorrow's workshop on network fragility here in Edinburgh, I am back to reading Gerard Debreu's Theory of Value.  I am currently in the chapter on consumer theory.

I like the way Debreu emphasizes that the indifference relation is a binary relation that partitions the consumer's choice set into indifference classes (i.e., the indifference relation is reflexive, symmetric, and transitive: an equivalence relation).  The interest in the utility function then follows from the fact that we would like to have some increasing function that associates each indifference class with a real number that can be used to distinguish it from other indifference classes.

The proof of existence of a utility function when one assumes a form of continuity of preferences is quite clever.  The proof takes a dense subset of a consumer's choice set, defines a clever increasing function on that subset, and extends the function from the dense subset to the entire choice set.  The extended function is then shown to be continuous.

As in producer theory, convexity of the choice set is crucial.  Working through the three different types of convexity: weak-convexity, convexity, and strong convexity was worthwhile.  Weak-convexity allows "thick" indifference curves, convexity rules out such "thick" indifference curves, strong-convexity is the type of convexity that is taught to 1st year undergraduates as being one of the reasons that marginal rates of substitution decrease as one moves down an indifference curve. 

The wealth constraint.  The proof of existence of equilibrium in a private ownership economy rests crucially on the continuity of the correspondence between the set of price-wealth pairs for which the set of possible consumption bundles is not empty and the choice set of our agent.  Why?  I will let you know after I have read Chapter 5 on equilibrium.  For now I cite Debreu...(I am going to guess that continuity of the correspondence is necessary to ensure that our profit-maximizing producers do not choose to produce an amount of output that falls into a "hole," so to speak, in the set of consumer utility-maximizing bundles...I will let you know if this intuition turns out to be correct or not!)

Social Dynamics of Tribal Wars...

A very interesting post about an online game called tribal wars.  Here is the Wikipedia entry.  I wonder if such games will ever become a possible (and then eventually acceptable) source of data for academics?

Wednesday, December 8, 2010

Theory of Value: Chapter 3 (cont'd...Again!)...

Last post on producer theory.  In his end-of-chapter notes, Debreu underlines three things that are not covered by the producer theory that he has described.  I repeat them below as I think they merit attention:
  1. External economies and diseconomies: the case where the production set of a producer depends on the production sets of the other producers (and or on the consumptions of consumers).  Both of these cases are, I think, incredibly likely to occur in the real-world.  In fact such interdependencies are well modeled by networks.
  2. Increasing returns to scale: one of my new favorite pastimes...these can also be well modeled with networks (although they can be well-modeled via other methods as well).
  3. The behavior of producers who do not take prices as given: monopolistic competition...I would go so far as to submit that some form of monopolistic competition is a better null model of producer behavior than perfect competition.

Theory of Value: Chapter 3 (cont'd)...

Producer Theory and Profit Maximization...where exactly are opportunity costs accounted for within the general equilibrium framework?  This question was posed to me by one of my first year undergraduates this year (in a slightly different form!) and I don't think I had a very good answer for him.

As I read through Debreu's axiomatic treatment of profit maximization I find myself asking the same question.  Where are opportunity costs taken into account?  Are opportunity costs essentially a special type of contingent commodity that exists in perhaps a different time and place with its own price?

This matters because the assumption of additivity and the possibility of inaction implies that the maximum profit of a producer either does not exist or is null.  Null profit in equilibrium makes sense to me IF one is talking about economic profits and not accounting profits.  Economic profit requires taking opportunity costs into account...

Anyone out there have any thoughts on this one...or is this discussion just too pedantic?

Theory of Value: Chapter 3...

Chapter 3 is on producer theory.  I was rolling right along without problems through the first few pages until I encountered the following:
"A production yi is classified as possible or impossible for the ith producer on the basis of his present knowledge about his present and future technology.  The certainty assumption implies  that he knows now what input-output combinations will be possible in the future (although he may not know the details of the technological process which will make them possible)."
How could you know the input-output combinations that are possible in the future without knowing the technology?  Seems a bit weird to assume certainty, but then to also assume that producers have perfect knowledge about everything except the technology used to produce things. 

A more interesting comment appears in Debreu's discussion of the various assumptions made on a producer's production possibilities set.  While discussing various interpretations of the additivity assumption, Debreu writes as follows:
"In so far as the [production possibilities set] for a producer represents technological knowledge, it is clear that two production plans separately possible are jointly possible.  Alternatively the jth producer can be interpreted as an industry rather than a firm; then the additivity assumption means that there is free entry for firms into that industry.  Under additivity if yj is possible than so is kyj, where k is any positive integer.  Therefore additivity implies a certain kind of non-decreasing (i.e., increasing or constant) returns to scale."
It is this last comment that additivity implies a certain kind of non-decreasing returns to scale that stopped me.  I see why k has to be an integer (additivity implies that yj + yj +...+yj  = kyj must also be possible). I suppose I had just forgotten that constant returns to scale act as lower bound when we assume additivity (i.e., that decreasing returns to scale are not possible).

The next assumption discussed is convexity.  Convexity implies non-increasing returns to scale (convexity plus the no-free-lunch assumption rules out increasing returns).  Thus if one wants to assume additivity and convexity of the production set for a particular producer, then the production technology must exhibit constant returns to scale.    
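
To pin down the logic for myself, here is a sketch of the standard argument, assuming the possibility of inaction (0 is in Y_j), which Debreu also uses:

```latex
% Additivity gives upward (integer) scaling of any feasible production:
%   y \in Y_j \;\Rightarrow\; ky \in Y_j, \qquad k = 1, 2, \dots
% Convexity together with inaction (0 \in Y_j) gives downward scaling:
%   y \in Y_j \;\Rightarrow\; \lambda y = \lambda y + (1-\lambda)\,0 \in Y_j,
%   \qquad 0 \le \lambda \le 1.
% Combining the two: for any t > 0 choose an integer k \ge t; then
\[
  t y \;=\; \tfrac{t}{k}\,(ky) + \Bigl(1 - \tfrac{t}{k}\Bigr)\,0 \;\in\; Y_j ,
\]
% so Y_j is a cone and the technology exhibits constant returns to scale.
```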

Theory of Value, Chapter 2...

So, as yet another side project, I am reading Gerard Debreu's Theory of Value: An Axiomatic Analysis of Economic Equilibrium.  It has rekindled my interest in abstract mathematics, and as an economist it has so far proved helpful in understanding the particularities of General Equilibrium theory in more detail.

Right now I am reading Chapter 2: Commodities and Prices.  From my MSc I was aware that Arrow-Debreu general equilibrium assumed the existence of markets for all commodities, where commodities are completely specified by their intrinsic characteristics, the time that they are acquired, and their location (i.e., Red Winter Wheat, today, in Chicago is a different good from Red Winter Wheat, a year from now, in San Francisco, etc.).

I was not aware, however, that the commodity space that defines all possible combinations of these commodities has finite dimension and that time is also taken to be finite.  Even the claim that the commodity space has finite dimension for a fixed moment in time seems implausible.  Intuitively, economic growth would seem to require (or be driven by) continual innovation of new commodities, and I am not sure how one is to think of commodities that have not been created yet within this framework.  Are they to be accommodated by allowing the dimensionality of the commodity space to increase with time?  Or perhaps this is simply being abstracted from in the general equilibrium framework.

Debreu addresses some of these critiques in his end of chapter notes.  Note 2 says that it is the assumption of finite time that allows the commodity space to be of finite dimension.  He goes on to say that many of the results to follow can be extended to an infinite dimension commodity space.  I am still not sure whether this addresses my concern about the ability of the theory to deal with commodities that have yet to be invented...

I should mention that I do find the theory quite elegant.  It is kind of cool the way you can derive exchange rates, interest rates, and discount rates from the price system as long as you have a unit of exchange.  Here it is assumed that there exists some unit of exchange (I suppose that this is why so many economists have devoted their careers to developing theories of where money comes from...which is something else that I have never understood!)

Sunday, December 5, 2010

On the Principle of Continuity of Approximation...

"If the conditions of the real world approximate sufficiently well the assumptions of the ideal type, the derivations from these assumptions will be approximately correct."
Many thanks go to Cosma Shalizi for posting this critique by Herb Simon and Paul Samuelson of Milton Friedman's infamous essay "The Methodology of Positive Economics"

The quote above is from Simon's section.  Also be sure to read Samuelson's section where he posits
"that the non-positivistic Friedman has a strong effective demand which a valid F-Twist brand of positivism could supply"
Just brilliant...

After reading things like this it is very, very unclear to me why I should care at all about the economic and policy implications of a macro model where agents have rational expectations...

Brilliant Lectures on Finance...

In my spare time I am putting myself through Robert Shiller's course at Yale on Financial Markets...I just finished lecture 5 on insurance.  My main takeaway so far is that financial innovation is very closely related to institutional innovation.  Institutions significantly constrain and shape the types of financial innovations that are possible or achievable.  The link between institutions and finance was not one that I appreciated adequately prior to this course...

Just Finished Some Summer School Apps...

I just finalized my application for the Santa Fe Summer School on Complex Systems, and my application for a summer internship at the World Bank...this still leaves my applications for internships at the IMF and the Federal Reserve to finish.  If I fail to get accepted to the Santa Fe Summer School, I am going to apply to the Santa Fe Program on Computational Social Sciences as well...

Anyone else have ideas for summer internships?

Saturday, December 4, 2010

Posting will be erratic this week...

I am working on my presentation for the upcoming workshop in Edinburgh on Network Fragility and as such posting will be highly limited this week.

Tuesday, November 30, 2010

The Vaults and Garden Cafe...

Coming at you, post-"Community Detection in Multi-Slice Networks," from the Vaults and Garden Cafe in Oxford...

Dr. Mason Porter gave a very nice and thorough seminar on applications of his method of detecting communities in multi-slice networks.  Applications ranged from financial networks to actual human brains (and everything in between).  The technique itself is deceptively simple: one simply seeks the community partition that optimizes a well-known network quantity called modularity (where modularity is measured in terms of departure from some chosen null model).  However, actually implementing the technique can be extremely challenging.

I am interested in applying these techniques to international trade (which to Dr. Porter's knowledge has not yet been attempted).  To start, I need to find an appropriate null model for international trade networks.  I will then find a community partition that maximizes modularity relative to what one would expect given my null model.  Typically, null models are random network models...so which random network model is most descriptive of the international trade network?  What about Kronecker graphs?
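
To remind myself what the objective function actually is, here is a minimal sketch of (single-slice) modularity measured against the standard configuration-model null; the toy graph is mine, and the multi-slice version adds inter-slice coupling terms on top of this:

```python
# Newman modularity of a partition: Q = sum over communities c of
# e_c / m - (d_c / (2m))**2, where e_c counts intra-community edges and
# d_c is the total degree inside c (the configuration-model null).
import networkx as nx

def modularity(G, partition):
    m = G.number_of_edges()
    Q = 0.0
    for community in partition:
        nodes = set(community)
        e_c = sum(1 for u, v in G.edges() if u in nodes and v in nodes)
        d_c = sum(G.degree(n) for n in nodes)
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q

G = nx.barbell_graph(5, 0)               # two cliques joined by one edge
partition = [range(0, 5), range(5, 10)]  # split at the bridge
print(modularity(G, partition))          # ~0.45: a "natural" partition
```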

The White Horse Pub...

I almost forgot...I had an excellent meal last night in the White Horse pub near Balliol College.  The pub itself was founded in 1591!  I had planned to go search out a good curry on Cowley St., but became distracted by a hand-scrawled chalkboard offering "Fresh Game Pie and Local Ale." 

So instead of a curry dinner...I had a rabbit, venison, and boar pie with potatoes.  I spent the remainder of my evening at the White Horse by the fire with two pints of a local ale called "The Village Idiot" and a newly purchased book: Marx: A Very Short Introduction...

Complexity Catastrophes...

On the train down to Oxford I read through almost all of the proceedings from the original Santa Fe Conference on economics as a complex system.  I found the following chapters to be particularly useful for economists:
  1. The Evolution of Economic Webs, Stuart Kauffman
  2. Persistent Oscillations and Chaos in Economic Models, Michele Boldrin
  3. Self-Reinforcing Mechanisms in Economics, W. Brian Arthur
  4. Computation and Multiplicity of Economic Equilibria, Timothy J. Kehoe
  5. Rational Expectations, Game Theory and Inflationary Inertia, Mario Henrique Simonsen
  6. The Global Economy as an Adaptive Process, John H. Holland
Of the above chapters, Kauffman's on the evolution of economic webs is by far the most thought-provoking.  I am particularly fond of his ideas on complexity catastrophes, as they parallel some of my own ideas on the dangers posed by overly dense connectivity in financial networks.

Monday, November 29, 2010

Afternoon in Oxford...

After spending most of the early morning walking around Oxford trying to find a coffee shop that doesn't force you to pay for wifi service, I have now settled down to work on my international trade networks presentation in the Cafe Nero above the original Blackwell's bookstore...apparently Blackwell's is close enough to Trinity College that I can access its eduroam wifi.

On a side note, I must say that  I am underwhelmed by the coffee shops that I have encountered in Oxford so far (with the exception of Green's Cafe)...Edinburgh wins easily using the coffee shop metric...

Are there any Oxford alums reading my blog?  If so, suggestions regarding food, coffee, and pubs would be much appreciated...tonight I am going to go searching for a good curry over on Cowley St.

Saturday, November 27, 2010

Edinburgh at Dusk...

Took this photo from the top of Arthur's Seat around 4:30 pm...
 
I am heading south to Oxford University tomorrow to attend a seminar on the evolution of community structure in multi-slice networks at CABDyN.

I have never been to England before (I do not count layovers at Heathrow), and I am excited about this first (of hopefully many) visits to Oxford University.

Tuesday, November 23, 2010

Computational Model of Trade...

Just need to jot down the specifics of an idea I had about a computational model of trade with endogenous network formation...

The world consists of an exchange economy with N agents arranged in a circle.  Each agent is described by the following parameters:
  1. Parameter, r~U[0,1] describing their level of risk aversion.
  2. Parameter, v~U[a,b] describing their vision.  This vision parameter tells how many agents to the left and right are in an agent's neighbourhood.  Vision could also be the same fixed v for all agents to simplify things.  Assume that agents have perfect information about the endowments, etc., of all other agents in their neighbourhood/vision.
  3. Each agent is endowed with some amount of two goods sugar and spice.  Endowments could be distributed uniform on some interval.
  4. Agents have the same utility function, which takes the amounts of sugar and spice as arguments, and also must include risk aversion somehow.  I am open to suggestions as to what utility function would be most appropriate.
Agents would then have to decide whether or not to trade (if they wanted to trade at all) at "home" within their neighbourhood/vision or "abroad" by linking up with some other agent about which they know nothing (except the distributions of risk aversion, vision, etc.).  Of interest to me are the following...
  1. Do agents with higher levels of risk aversion trade at home more often? Do agents with less risk aversion trade abroad more often?
  2. What type of trade networks evolve through this process? Network structure within time steps and network structure aggregated across time steps would be of interest.
  3. What are the equilibrium properties of such a model?  Is there meaningful convergence?  If so, how fast?  Is the equilibrium Pareto efficient/Pareto optimal?
I wrote this down very fast, and no doubt left out details necessary to close the model, but I thought I had better write it down lest I forget...
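
Here is one way the spec above might start to look in code.  The Cobb-Douglas/CRRA utility and the neighbourhood logic are placeholders for choices the model leaves open, and every parameter is illustrative:

```python
# A skeleton of the circular exchange economy described above.
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of agents on the circle

class Agent:
    def __init__(self, idx):
        self.idx = idx
        self.r = rng.uniform(0, 1)        # risk aversion, r ~ U[0, 1]
        self.v = int(rng.integers(1, 4))  # vision: neighbours to each side
        self.sugar = rng.uniform(1, 10)   # endowments of the two goods
        self.spice = rng.uniform(1, 10)

    def utility(self):
        # Placeholder: Cobb-Douglas aggregate run through a CRRA transform
        # so that curvature increases with risk aversion.
        c = self.sugar ** 0.5 * self.spice ** 0.5
        return np.log(c) if self.r == 1 else c ** (1 - self.r) / (1 - self.r)

    def neighbourhood(self):
        """Indices of agents within `v` steps on the circle."""
        return [(self.idx + d) % N
                for d in range(-self.v, self.v + 1) if d != 0]

agents = [Agent(i) for i in range(N)]
print(agents[0].neighbourhood(), agents[0].utility())
```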

International Trade Network: A First Pass...

I will be presenting a talk on the structure of international trade at a networks workshop being held here at the University of Edinburgh.  What follows is a dump of my thoughts related to the subject, which so far are largely based on my attempt to replicate recently published work on the structure of the international trade network.

International Trade Network Data...
There are two principal sources of network data on international trade (that I have been able to find):
  1. Prof. Kristian Skrede Gleditsch at the University of Essex:  Data are from 1948-2000 and the primary source is the IMF.
  2. UN Comtrade: Data are from 1962-2009 and the primary source is of course the UN.  
  3. There is a third data source, the Economics Web Institute, that I would like to dissuade people from using even though they use Prof. Gleditsch's IMF data (without going into too much detail, the Economics Web Institute seems to use an inconsistent methodology to assign edge weights (trade values) to countries when converting from Prof. Gleditsch's raw .asc files to .xls workbooks).
If you would like to hear the gory details on the data validity issues feel free to contact me.

Hierarchical Clustering and Trade Network Density:
For the moment I am simply trying to replicate the work of the Deem et al. (2010) paper (linked to above).  I have applied an average-linkage hierarchical clustering algorithm to the OECD international trade network as outlined in their paper.  Below are some plots of the cophenetic correlation coefficient (CCC) and network density for the international trade network for OECD countries using two different data-sets.  The first plot uses data entirely from the UN Comtrade database.  NBER recessions are marked with gray bars.  I downloaded the data by hand from Comtrade (commodity code is SITC ver. 1 AG0) and then used a Python script to clean and reorganize the .xls spreadsheets into more manageable text files.  Statistical analysis is done using SciPy, network analysis (so far) has been done using NetworkX, and plotting has been done using Matplotlib.
The plot below is the CCC and network density for the international trade network for OECD countries using Prof. Gleditsch's IMF data for 1948-2000 and then UN Comtrade from 2001-2009 (commodity code this time is SITC ver. 3 AG0). 
There are some differences between the two plots of the CCC (I have not yet tested whether or not they are significantly different nor have I tested whether or not the CCC jumps significantly during/after recessions...this is on my to do list!).
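
For reference, here is a minimal sketch of the CCC computation with SciPy, using a made-up trade matrix and a toy weight-to-distance transform (d = 1/(1 + w)); the transform used in the actual analysis may differ:

```python
# Average-linkage clustering of a (toy) trade network and its cophenetic
# correlation coefficient (CCC).
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, cophenet

rng = np.random.default_rng(1)
n = 8
W = rng.uniform(0, 100, size=(n, n))  # toy trade-value matrix
W = (W + W.T) / 2                     # symmetrize
np.fill_diagonal(W, 0)

D = 1.0 / (1.0 + W)                   # heavier trade => "closer" countries
np.fill_diagonal(D, 0)

y = squareform(D)                     # condensed distance vector
Z = linkage(y, method='average')      # average-linkage hierarchical clustering
ccc, _ = cophenet(Z, y)               # cophenetic correlation coefficient
print(ccc)
```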

Still to come:
  1. Dendrogram of identified clusters
  2. Results of community structure algorithm applications
  3. Weighted clustering algorithms and other graph measures
Basically I am just trying to learn to do network analysis using Python by applying the tools to an interesting topic...

Saturday, November 20, 2010

Ergodic Theory: A Verbal Monte-Carlo...

An interesting verbal example (a verbal Monte Carlo, if you will) of the difference between ergodic and non-ergodic processes, based on a post by Robert Vienneau, and one that I suspect one of my macro profs, Sevi, will like, goes as follows:

Suppose I observe the consumption sample paths of 10,000 individuals over 10,000 units of time.  Let us also make the completely implausible assumption that these 10,000 consumption paths were generated by the same process.  Suppose that I pluck one sample path and look at the distribution across time.  Now suppose that I pluck out the observation at t=350 from each of the 10,000 sample paths and look at the distribution.  If this process is ergodic, then these two distributions should converge to one another in large enough samples.

With ergodic processes, distributions across time and distributions across people should be statistically the same (in large samples).  If the process is non-ergodic, then the distribution across people at a given moment in time and the distribution across time will not converge.  Sevi always comments about how summing (aggregating) across people and summing across time are not always equivalent...is this the same as saying that economic processes in such cases are non-ergodic?  I don't know.
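
Here is a minimal numerical version of the thought experiment, using an ergodic AR(1) process with illustrative parameters:

```python
# Time average vs. ensemble average for an ergodic AR(1) process,
# x_t = rho * x_{t-1} + e_t: the distribution along one long path should
# match the cross-sectional distribution at a fixed date.
import numpy as np

rng = np.random.default_rng(0)
n_people, T, rho = 10_000, 10_000, 0.9

# One person across time: a single long sample path.
path = np.zeros(T)
for t in range(1, T):
    path[t] = rho * path[t - 1] + rng.normal()

# Many people at one date: 10,000 chains after a 400-period burn-in.
x = np.zeros(n_people)
for _ in range(400):
    x = rho * x + rng.normal(size=n_people)

# Both should look like N(0, 1/(1 - rho**2)), i.e., std of about 2.29.
print(path.std(), x.std())
```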

Now let's go back and relax the ridiculous assumption that all agents have the same process that determines their consumption.  With heterogeneous agents, even if each agent individually is following an ergodic process, the aggregate distributions across agents and across time will be a mixture of ergodic processes and therefore must be non-ergodic (I think).

Whether or not the above mentioned processes are stationary or non-stationary and why is still a mystery to me.  In Vienneau's example, the process he used in his actual monte-carlo was stationary but non-ergodic. 

Friday, November 19, 2010

Musings on Ergodic Theory...

There comes a time in every man's life where he feels that he should know more about ergodic theory than he does, for me that time arrived at 2:30 pm this afternoon while reading Brian Arthur's Increasing Returns and Path Dependence in the Economy

First I would like to prove to myself that Cosma Shalizi's assertions in this post are in fact correct.  Specifically he claims that...
"It is not true that non-stationarity is a sufficient condition for non-ergodicity; nor is it a necessary one."
This says that non-stationarity does not imply non-ergodicity.  I want to verify this with a counterexample, so I need an example of a non-stationary process that is ergodic.
"It is not true that 'positive destabilizing feedback' implies non-ergodicity."
Again, to find a counterexample, I need an example of an ergodic process that exhibits positive destabilizing feedback.
"It is not true that ergodicity is incompatible with sensitive dependence on initial conditions."
Here I need an example of an ergodic process that exhibits sensitive dependence on initial conditions.  Cosma has already pointed out in his post that chaotic processes will generally serve as an example of an ergodic process with sensitive dependence on initial conditions.
"It is not true that ergodicity rules out path-dependence, at least not the canonical form of it exhibited by Arthur's models"
Finally, I will need an example of an ergodic process that also exhibits path-dependence. 
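
Chaotic maps already supply at least the third of these: the logistic map at r = 4 is ergodic with respect to its invariant (arcsine) density yet has sensitive dependence on initial conditions.  A quick sketch:

```python
# The logistic map x' = 4x(1 - x): nearby orbits diverge (sensitivity),
# yet time averages from almost any start converge to the ensemble mean
# of 0.5 under the invariant arcsine density (ergodicity).
def logistic_orbit(x0, T):
    xs, x = [], x0
    for _ in range(T):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_orbit(0.2, 100)
b = logistic_orbit(0.2 + 1e-10, 100)
print(abs(a[-1] - b[-1]))             # O(1) gap: sensitive dependence

long_run = logistic_orbit(0.2, 1_000_000)
print(sum(long_run) / len(long_run))  # ~0.5: the time average converges
```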

I am currently reading Scott Page's essay on path dependence, and I suspect that I will be able to find several of the examples that I need in the text.

While all of this might seem very far removed from economics, I think understanding all of the above will provide useful constraints on the types of macroeconomic modelling techniques that I should pursue...at least this is my hope. 

Thursday, November 18, 2010

Quote of the Day...

"Equilibrium is blither"
-J.M. Keynes

Friday, November 5, 2010

Quote of the Day...

"I believe that something drastic has happened in computer science and machine learning. Until recently, philosophy was based on the very simple idea that the world is simple. In machine learning, for the first time, we have examples where the world is not simple. For example, when we solve the "forest" problem (which is a low-dimensional problem) and use data of size 15,000 we get 85%-87% accuracy. However, when we use 500,000 training examples we achieve 98% of correct answers. This means that a good decision rule is not a simple one, it cannot be described by a very few parameters. This is actually a crucial point in approach to empirical inference.
This point was very well described by Einstein who said "when the solution is simple, God is answering". That is, if a law is simple we can find it. He also said "when the number of factors coming into play is too large, scientific methods in most cases fail". In machine learning we dealing with a large number of factors. So the question is what is the real world? Is it simple or complex? Machine learning shows that there are examples of complex worlds. We should approach complex worlds from a completely different position than simple worlds. For example, in a complex world one should give up explain-ability (the main goal in classical science) to gain a better predict-ability."

-V.N. Vapnik

My question is: What, if any, applicability does this quote have to economics?  Should we be willing to trade-away explain-ability for predictability? 

Thursday, November 4, 2010

Self-Contained Development Environment...

My Mac and I have been having a fairly serious domestic dispute over which version of Python I am allowed to use...tonight I finally won!  I have succeeded in setting up a self-contained development environment for Python on my Mac using MacPorts...I highly recommend the Stack Overflow post on the subject.

Monday, October 25, 2010

Slight PhD Research Detour...

My PhD research has taken a slight detour over the last couple of days.  In order to do theory I need to work with data, and there is just not a lot of publicly available data on financial networks at the moment.  So I decided that for the time being I am going to do some empirical research on trade networks using data from the UN Comtrade database

The inspiration for my research comes from the following paper on the evolution of international trading networks.  The paper basically postulates that international trade is best described as a specific type of evolutionary system that satisfies the following three requirements:
  1. The dynamics of the international trade system are "slow" to respond to environmental change
  2. That environmental change is present
  3. Information is exchanged between agents in the system
The authors have studied this type of evolutionary system in previous research and two of their papers on the details of their evolutionary theory can be found here and here.  Their theory makes three specific predictions about the evolution of international trade networks:
  1. Decreased modular/hierarchical structure in the world trade network increases the sensitivity of the network to recessionary shocks
  2. Decreased modular/hierarchical structure decreases the rate of recovery from shocks
  3. Recessions (negative shocks) should spontaneously increase the modular/hierarchical structure in the trade network
According to the authors, all three of these predictions are borne out in the data...I am going to attempt to replicate their results.  Their theory implies that the modular/hierarchical structure that forms in response to environmental shocks (recessions) increases the resistance to and rate of recovery from shocks (recessions).  Globalization reduces modular/hierarchical structure in the global trade network and thus should lead to increasingly large recessions and a decreased rate of recovery from these recessions.

I have already written Python scripts to download the UN trade data and combine it into a single text file for use in the analysis.  I will be building an on-line code repository in the near future where people can come and download my code so that they can attempt to replicate MY results...

I would be interested in comments from readers concerning what standard economic theory I could bring to bear on this problem...I suspect that there is quite a bit of support for this line of research in more mainstream economics, but I could be wrong...

Sunday, October 24, 2010

Sage is Awesome!

I just started playing around with Sage this morning and am already hooked!  The pedagogic possibilities alone would make learning to use Sage worthwhile for economists...imagine a classroom where the lecturer was simultaneously plotting production possibility sets in 3D and then manipulating them as she chatted about returns to scale, diminishing marginal products, sunk costs, etc.

On the research end, Sage has most (if not all) of the functionality of MatLab, Mathematica, Magma, etc., and because it is Python-based it incorporates a large number of additional features absent from those other platforms...and it is totally free!

Friday, October 22, 2010

INET Grants...

INET grants have been announced (some time ago actually...)!  Take a look and see if any of the funded projects sound particularly interesting...while all of them sound interesting to me, one project in particular stands out...I will leave it to you to guess which one!

Python Related Activities...

Today I am debugging my code for power-law estimations...hopefully I will have a working beta version by tonight!  If/when I do, I will post it somewhere for others to use...

Also, I downloaded Sage today...it will go on my list of Python-related things to learn.

Tuesday, October 19, 2010

Remiss on my Posting Again...

Teaching is hard work (and also quite time consuming)!  A quick update on my research, with more to follow later today...

For the last two weeks, I have been teaching myself how to program in Python.  I have now done a quick pass through tutorials for all of the major modules that I suspect I will be using:
  1. NumPy
  2. SciPy
  3. NetworkX
  4. PyGraphviz
  5. SymPy
  6. Matplotlib
  7. RPy2
I now plan to double back and do more detailed work with each of the packages with my research goals in mind.  The first thing I plan to do is implement the Clauset et al. (2009) procedure to detect power laws in empirical data using Python and then systematically apply the procedure to major economic variables.  The Clauset et al. procedure relies on MLE and an iterative application of the Kolmogorov-Smirnov (KS) test to determine the threshold value above which the power law is a good model.  The authors are heavily critical of least-squares approaches to fitting power laws to data and detail the issues with using such procedures.  Interestingly, most empirical studies of power laws in economics (that I have seen) seem to use ad hoc least-squares approaches.
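
To keep myself honest about the mechanics, here is a minimal sketch of the core of the procedure for continuous data: MLE for the exponent given a threshold, a KS distance for fit, and a scan over candidate thresholds.  It omits the bootstrap goodness-of-fit stage that the full recipe requires:

```python
# Core of the Clauset-Shalizi-Newman fitting procedure (continuous case).
import numpy as np

def alpha_mle(x, xmin):
    """MLE for the exponent: alpha = 1 + n / sum(log(x_i / xmin))."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_distance(x, xmin, alpha):
    """Max gap between the empirical and fitted power-law CDFs on the tail."""
    tail = np.sort(x[x >= xmin])
    n = len(tail)
    empirical = np.arange(1, n + 1) / n
    fitted = 1.0 - (tail / xmin) ** (1.0 - alpha)
    return np.max(np.abs(empirical - fitted))

def fit_power_law(x, n_grid=200):
    """Pick the x_min that minimizes the KS distance, then report alpha."""
    candidates = np.quantile(x, np.linspace(0.0, 0.95, n_grid))
    best = min((ks_distance(x, xm, alpha_mle(x, xm)), xm) for xm in candidates)
    return alpha_mle(x, best[1]), best[1]

# Sanity check on synthetic Pareto data with true exponent alpha = 2.5:
rng = np.random.default_rng(0)
x = rng.pareto(1.5, size=5000) + 1.0  # p(x) ~ x**(-2.5) for x >= 1
print(fit_power_law(x))
```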

Whether the results of this study will be publishable or not I have no idea, but I think it would be very useful to have an understanding of which economic variables exhibit power law behavior, and which are simply heavy-tailed (i.e., log-normal or something else)...if nothing else I will add a new tool to my arsenal that has significant practical applications...

Saturday, October 16, 2010

A Great Loss...

Benoit Mandelbrot died today...a great light has gone out...Such an intellect comes only a few times each generation.  He was a truly original thinker, and one of my role models...

He will be truly missed.

Tuesday, October 12, 2010

The Aonach Eagach...

First major hill-walking trip of the year to Glencoe this past weekend.  A group of us tackled the Aonach Eagach Ridge on the north side of Glencoe...here is a map of the hike:

The ridge is composed of two Munros and two tops and the hike is rated as a Grade-II scramble (which I would say is a fair assessment of the difficulties).  I have climbed Liathach in Torridon (another Grade II) before and I would rate the Aonach Eagach ridge to be more difficult/interesting scrambling than Liathach.

Though things started out cloudy, by the time the group reached the scrambling section the weather was excellent.  I was even able to grab a picture of a Brocken Spectre...


I will add some more photos once I have a chance to get more from friends...my next post will detail my ill-fated hike the next day up the Ballachulish Horseshoe...

Tuesday, October 5, 2010

Sound Familiar...

For any readers who happen to be interested in economics take a look at the following article on Wikipedia...sound familiar?

I came across a reference to the "ecological fallacy" in Daniel Katz's slides on Schelling's segregation model...

For Those Interested in ABM...

I came across an excellent set of lecture slides from Daniel Katz at Michigan on computational modeling for social sciences, and for those interested in NetLOGO there is an excellent set of introductory tutorials on Youtube

Sunday, October 3, 2010

4th Cooked Breakfast of the Year...

Deacon Brodie's...coffee was terrible but the full Scottish breakfast was excellent!  Definitely the best I have had so far...and it was relatively cheap (about 7 GBP).   Breakfast included 2 rashers of bacon, 2 sausages, large portion of Haggis, beans, fried egg, potato scone, plum tomato, two large mushrooms, and crusty bread.  I would highly recommend it...

Friday, October 1, 2010

Goldmine of Information on using Python in Social Sciences...

I came across this gem of a post on Zero Intelligence Agents while waiting for Business School IS support to install Python, NetLOGO, and R onto my office computer...

Sunday, September 26, 2010

Third Cooked Breakfast of the Year...

Had breakfast this morning at Gourmet Grub on Rose Street...I ordered the Ultimate Breakfast which consisted of Venison sausage, bacon, scrambled eggs, toast, and Haggis.  It was very good, but a bit pricey...

Wednesday, September 22, 2010

Totally Remiss on my Blogging...

OK...I am back.  Have been preparing for the start of the teaching term and as such have been a bit absent minded when it comes to blogging.  I have another meeting with my PhD adviser, Ed Hopkins, tomorrow afternoon and assuming he concurs with my research proposal for the next year I will move out smartly.

I have finally obtained access to the CRSP data (which was used in the Billio et al. paper on econometric measures of systemic risk) and I plan to start my research by duplicating part of their work and then moving on to analyze a similar data-set for the UK.  I also have plans to ground some of the systemic risk measures a bit more in economic network theory, and then develop a couple of measures of my own...

On the theory side of things, I am working on refining a set of research questions to tackle the complexity of large economic networks.  What I am going to try and tackle first is the following: I want to develop an economic model that captures...
  1. Densification power laws: networks are becoming denser over time, with the average degree increasing (and hence with the number of edges growing super-linearly in the number of nodes). Moreover, the densification follows a power law pattern
  2. Shrinking diameter: The effective diameter is, in many cases, actually decreasing as the network grows.
This idea is motivated by empirical findings and other work from this paper by Leskovec et al.  I have no firm modeling strategy yet, but strategic complements and some type of localized information structure will almost certainly be involved.
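
As a side note, the densification exponent itself is easy to estimate from network snapshots: if e(t) ~ n(t)^a, then a is just the slope of a log-log regression of edge counts on node counts.  A minimal numpy sketch with made-up snapshot counts:

    import numpy as np

    # Hypothetical (nodes, edges) snapshots of a growing network
    n = np.array([1000, 2000, 4000, 8000])
    e = np.array([5000, 12000, 30000, 72000])
    a, log_c = np.polyfit(np.log(n), np.log(e), 1)
    print(a)  # a > 1 means edges grow super-linearly in nodes (densification)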

Sunday, September 19, 2010

Second Cooked Breakfast of the Year...

This morning I had breakfast at Ryan's Bar.  Menu included: 2 rashers of bacon, 1 sausage, scrambled eggs, blood pudding, 2 hash browns, 2 potato scones, beans and toast, grilled mushrooms and tomato.

The beans and toast were an excellent addition, and although the blood pudding was good...I missed my haggis!  Overall, I would say the breakfast was a step up from last week's breakfast at Always Sunday on High St...

However, the coffee was terrible...although I noticed that the bartender who made my coffee had just started, so maybe that had something to do with it... 

Saturday, September 18, 2010

Datastream Tutorial...

So it looks like tomorrow is going to be spent in the library learning how to use Thomson-Reuters Datastream.  I have a nice idea for an empirical study of financial networks that will help me get started on my PhD...but first I will need to collect data on stock prices, market capitalization, and sector code for all stocks on the FTSE 100.  I think (hope?) that I can complete the vast majority of the work by Winter holiday....

Today's Afternoon Run...

Didn't run outside...although I wish I had, as it is really nice in Edinburgh today!  The shower is broken at the flat (for the 4th straight day)...so I went to the gym and ran on the treadmill...

Introductory Maths and Stats: Intertemporal Optimization...

This lecture should be cut.  The material covered is not really used enough in the core curriculum to justify spending any time on it...some version of the material could be included as a separate handout over winter holiday.

Introductory Maths and Stats: Discrete-time Intertemporal Optimization...

This should be the culminating lecture of QM0.  Students should be able to understand the difference between static and dynamic optimization.  The intuition for this can be built by focusing time on explaining how one derives the life-time budget constraint from the budget constraints for each time period (basically just add/aggregate the period constraints, or integrate in the continuous-time case).  Really invest time in going through the details of exactly what the budget constraint is, how it is derived, etc.  This is important as it comes up again and again...
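
For my own notes, a quick sympy sketch of the two-period version of this derivation (toy notation of my own, not the notation from the lecture notes):

    import sympy as sp

    c1, c2, y1, y2, s, r = sp.symbols('c1 c2 y1 y2 s r', positive=True)
    period1 = sp.Eq(c1 + s, y1)             # save s out of first-period income
    period2 = sp.Eq(c2, y2 + (1 + r) * s)   # consume savings plus interest
    s_sol = sp.solve(period1, s)[0]         # s = y1 - c1
    print(period2.subs(s, s_sol))
    # c2 = y2 + (1+r)*(y1 - c1), i.e. c1 + c2/(1+r) = y1 + y2/(1+r)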

Intertemporal Choice: Two-Period Example: I wouldn't really change much in this section; it is nicely written and hits all the high points...

Intertemporal Choice: T-Period Example: Here the focus is on the permanent income hypothesis, which is a pretty good example that demonstrates the techniques involved.

More Complicated T-period Example: This section should be cut from the lecture and covered in tutorials...this would allow the lecturer to move through the above material at a more measured pace.  I would jump from the permanent income hypothesis material straight to the simple discussion of dynamic programming.

Simple Discussion of Dynamic Programming: The section is good, although the maths needs to be simplified a bit so as to coincide with the permanent income hypothesis section that would precede it.  Perhaps tutors could extend the dynamic programming case in the tutorials...

Introductory Maths and Stats: Kuhn-Tucker Theorem...

Intuition: Since most economic constraints are inequality constraints rather than equality constraints, it makes sense for students to learn a bit about Kuhn-Tucker theory...to build intuition I like to draw pictures that demonstrate the different sets of complementary slackness conditions for a single-variable function in both the maximization case and the minimization case.  There would be six diagrams that clearly emphasize corner solutions vs. the interior optimum, and the idea of a binding constraint versus a slack constraint.  Remember: if one of the constraints is slack, the other MUST be binding!

Kuhn-Tucker Theorem: After building intuition with diagrams in the single variable case, I would jump straight to the Kuhn-Tucker theorem and the corresponding algorithm used to solve inequality constrained optimization problems.

General Case: The notes on the general case are confusing and I am not sure that they add to the students' understanding of how to apply Kuhn-Tucker.  I would recommend cutting the notes on the general case and spending more time working problems and making sure that the students understand the difference between slack constraints and binding constraints...
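
A numerical illustration along these lines might be useful in the tutorials (a hypothetical log-utility example of my own; note that scipy expects inequality constraints in the form g(x) >= 0):

    import numpy as np
    from scipy.optimize import minimize

    # Maximize u(x, y) = ln(x) + ln(y) subject to p*x + q*y <= m;
    # utility is increasing, so the budget constraint should bind.
    p, q, m = 1.0, 2.0, 10.0
    obj = lambda z: -(np.log(z[0]) + np.log(z[1]))  # minimize the negative
    cons = ({'type': 'ineq', 'fun': lambda z: m - p * z[0] - q * z[1]},)
    res = minimize(obj, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2,
                   constraints=cons)
    print(res.x)  # approx (5.0, 2.5): the budget constraint binds at the optimum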

Friday, September 17, 2010

Morning Swim...

A little late on the post, but I finally have my gym card and this morning I went for a short swim.  Did about 750 m total of freestyle, breast stroke, and kick board.  Plan on going for a run tomorrow morning, and I will start lifting on Monday...

Introductory Maths and Stats: Static Unconstrained Optimization of N-Variable Function...

The title is a mouthful, but the lecture itself is fairly straightforward (aside from the notation being a bit complex)...

Rules for Single Variable Optimization:
  • If df/dx=0 and d^2f/dx^2<0 at any point x0, then x0 is a local max
  • If df/dx=0 and d^2f/dx^2>0 at any point x0, then x0 is a local min
  • If df/dx=0 and d^2f/dx^2=0 at any point x0, then the test is inconclusive (these conditions are necessary but not sufficient for x0 to be an inflexion point)...
The N-variable case is similar, but the notation becomes more complex (vectors instead of single variables, etc.) 
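
These rules are one-liners to check in sympy, which could make a nice tutorial aside (a toy function of my choosing):

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 3*x
    crit = sp.solve(sp.diff(f, x), x)  # critical points: [-1, 1]
    for x0 in crit:
        print(x0, sp.diff(f, x, 2).subs(x, x0))
    # second derivative is -6 at x=-1 (local max) and 6 at x=1 (local min)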

The Two Variable Case: The discussion in the lecture notes of unconstrained optimization with two variables is good.  I particularly like how emphasis is placed on using Taylor expansions in the argument.  I would only recommend that more pictures be included.  Anytime a Taylor expansion is used, it just screams DRAW A PICTURE!!!

Quadratic Form, Definite Matrices and Hessians: I would like to see this discussion moved up a bit.  Hessians should be introduced in lecture 2 on multi-variable calculus.  The maths notes should link more closely with the stats notes (particularly the linear algebra parts).   Definite matrices should be emphasized in both the maths and stats, and a solid amount of lecture and tutorial time should be spent on the concept.  Definite matrices provide the coat-hanger on which much of the linear algebra that is used in microeconomics and QM hangs...

Concavity and Convexity: Again, draw pictures.  Emphasize that the definitions are almost identical to the single-variable case.  The only difference is that we are now dealing with vectors rather than scalars in the argument of the function.

Chain Rule and the Envelope Theorem:  Material on the Envelope Theorem is scattered across three lectures.  I think the best thing to do is devote an entire lecture to the envelope theorem after all of the necessary maths have been developed.  This would serve as a useful mid-course refresher for the students, and I think it would make the theorem more understandable.  It is important, and thus I think it should get its own lecture...

Economic Applications: Solow Efficiency Wage model should be cut out of lecture and covered in a tutorial.  This would open up more lecture time for other more important topics...

The entire section that covers the derivations of the OLS equations using maximum likelihood should be cut from the lecture and converted into a handout for the students to study over winter holiday; it is very long and too complicated to ask about on the QM0 exam.  Lecture time and tutorials would be better spent elsewhere... 

Introductory Maths and Stats: Multi-Variable Calculus...

This is a continuation of my notes for my intro maths and stats tutorials.  This is my summary of Lecture Two: Multi-Variable Calculus...

Partial Differentiation: Easy to extend differentiation from the single-variable to the multi-variable case.  Say you have f(x,y); to take the partial derivative with respect to x, simply treat y as a constant and take the derivative of f with respect to x just as in the single-variable case!  That's it...higher-order derivatives are calculated by successive application of differentiation.  Demonstrate that cross-partial derivatives are equal (if f is well-behaved)!

I think it would be worthwhile to also mention the Hessian matrix (the matrix of second derivatives).  Talk about the special cases when the matrix is positive (semi) definite or negative (semi) definite.  Can also use it as an excuse to talk about eigenvalues, eigenvectors, determinants, etc. from linear algebra.  Example: f(x,y)=x^2 + y^2...
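
The suggested example works out cleanly in sympy:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 + y**2
    H = sp.hessian(f, (x, y))
    print(H)              # Matrix([[2, 0], [0, 2]])
    print(H.eigenvals())  # {2: 2}: both eigenvalues positive, so positive definite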

Total Differentiation and Chain Rules: I totally agree with Yu Fu...one should not try to memorize all of the chain rules related to partial differentiation; there are just too many combinations and cases.  Better to focus on understanding the concept of total differentiation and then the difference between independent and intermediate variables.  For example: suppose we have the usual case in economics where f(x(t), y(t)) and t=time.  In this case the independent variable is t, and the dependent variable is f (x and y are only intermediate variables that "filter" the effect of t on f). 
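
sympy makes this point nicely: declare x and y as functions of t, and it applies the chain rule through the intermediate variables automatically (toy example of mine):

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')(t)
    y = sp.Function('y')(t)
    f = x**2 * y
    print(sp.diff(f, t))  # 2*x*y*x'(t) + x^2*y'(t): the total derivative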

Implicit Functions and Differentiation: Just another application of partial differentiation and chain rules...

The Envelope Theorem: Understanding the envelope theorem is key in microeconomic price theory.  Mathematically, the envelope theorem is simply an application of chain rules, total differentiation, and partial differentiation!  No sweat...

Systems of Implicit Functions and Jacobian Determinants: BLAH!  OK, first I think the lecture notes need to be re-ordered so that the lecturer reviews determinants, Cramer's rule, etc. BEFORE tackling this section.  Note that Cramer's rule is a REALLY inefficient way to solve a system of linear equations!  For the QM0 exam the students may have to compute a 3x3 determinant, so they need to know a formula for it...I would go with the cofactor expansion...
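
For reference, the cofactor expansion along the first row of a 3x3 matrix, checked against numpy (an arbitrary matrix of my choosing):

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [0., 4., 5.],
                  [1., 0., 6.]])
    det = (A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1])
         - A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0])
         + A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0]))
    print(det, np.linalg.det(A))  # both 22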

Leibniz's Rule: This is a cut I think...should be covered in detail by the lecturer on the Ramsey model in Macroeconomics I...

Integration with Several Variables: Move towards the beginning...this is very straightforward and should probably be talked about right after partial differentiation...

Homogeneous and Homothetic Functions: This is a cut.  Not because it isn't important...it is very important (implications of CRTS and such) but I think that the lecturer should cover these topics in class during term.  There is already too much material in the QM0 lectures and this would allow for more detailed coverage of other topics...

Linear Dynamic System: If we want to keep this material in the course, then we need to do a much better job of teaching eigenvalues, eigenvectors, and matrix diagonalization techniques.  I would recommend moving the Appendix on eigenvalues and eigenvectors into the lecture notes and teaching students how to reach the general solution of a linear dynamic system properly...
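
The punchline I would want students to take away: if A = P D P^(-1), then x_t = P D^t P^(-1) x_0.  A quick numpy sanity check on a made-up 2x2 system:

    import numpy as np

    A = np.array([[0.5, 0.2],
                  [0.1, 0.4]])
    x0 = np.array([1.0, 1.0])
    evals, P = np.linalg.eig(A)  # A = P diag(evals) P^(-1)
    t = 10
    xt = P @ np.diag(evals**t) @ np.linalg.inv(P) @ x0
    print(np.allclose(xt, np.linalg.matrix_power(A, t) @ x0))  # True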

Introductory Maths and Stats: Single Variable Calculus...

As a first-year PhD student I will be teaching introductory maths and stats to the MSc students this year.  I am going through the lecture notes and making little notes for myself about things that I think should be emphasized (or de-emphasized) in the tutorials, as well as some little tricks that I have picked up along the way that should be helpful for the incoming MSc students.  The following are my notes to myself on single variable calculus...

Rules of Differentiation: The derivative is a linear operator.  Mathematically this means that d/dx(f(x) + g(x)) = d/dx(f(x)) + d/dx(g(x)) and d/dx(t*f(x)) = t*d/dx(f(x)).  In words, the derivative of any linear combination of well-behaved functions is equal to the same linear combination of the derivatives of the individual functions.  Note that if you remember that the derivative is a linear operator then you automatically know how to take derivatives of sums and differences of functions.

Other Rules I remember:
  1. Constant: The derivative of a constant is always zero.
  2. Powers: If f(x)=x^k, then df(x)/dx=kx^(k-1)
  3. The Chain Rule: NEVER forget the chain rule! d/dx(f(g(x)))=df/dg*dg/dx.  Most simple mistakes in taking a derivative come from forgetting about the chain rule.
  4. Derivative of the exponential and the natural logarithm functions: Easy...d/dx(e^x)=e^x (this result is one of the reasons that exponential functions turn up so often in the general solutions to differential equations), and d/dx(ln(x))=1/x.  Maybe review some basic properties of logarithms and exponentials...
  5. Product Rule: d/dx(f(x)*g(x))=d/dx(f(x))*g(x) + f(x)*d/dx(g(x))
Rules I never remember:
  1. Quotient Rule: Why? Because the quotient rule is simply an application of the product rule and the chain rule.
  2. Rule for d/dx(a^x): Why? Because it is better to just take natural logarithms and then differentiate.  For example, if f(x)=a^x then ln(f(x))=ln(a^x)=x*ln(a).  Differentiating both sides (using the rule for d/dx(ln(x)) and the chain rule) gives d/dx(ln(f(x)))=[1/f(x)]*d/dx(f(x))=ln(a), and finally d/dx(f(x))=ln(a)*a^x
l'Hopital's Rule: Unbelievably useful for taking limits.  Basically, if you are ever in the case where the limit of a ratio of two functions turns out to be 0/0 or inf/inf, then take derivatives of the top and bottom and take limits again...
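
Both of these points are one-liners to verify in sympy:

    import sympy as sp

    x, a = sp.symbols('x a', positive=True)
    print(sp.diff(a**x, x))               # a**x * log(a), as derived above
    print(sp.limit(sp.sin(x) / x, x, 0))  # 1: the classic 0/0 case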

At this point the lecture notes have a discussion of 1st order differential equations that is out of place.  These equations have not been covered yet in the notes, and even though this discussion is brief it detracts from more important material.  Lecture notes also have a long digression on stock returns, capital gains, and dividends.  This is an important economic application of the material being taught, but should be covered by tutors in the QM0 tutorials where it can be gone over at a slower pace...

Optimization: Recall the geometric interpretation of the derivative: the value of a derivative at a given point tells you whether the function is increasing or decreasing at that point:
  • If df/dx>0, then the function is increasing
  • If df/dx<0, then the function is decreasing
  • If df/dx=0, then f has a critical point
At this point I like to draw pictures to help me remember that d^2f(x)/dx^2>0 (<0) implies that the function is convex (concave), which leads to the corresponding definitions of maximums and minimums of functions.  I like to point out the main ideas both graphically and in terms of the FOC and SOC on derivatives.

Taylor Expansions: Important topic; the lecturer should spend more time laying out the details.  Re-emphasis should be placed on the Taylor expansion in the tutorials.

Concavity/Convexity and Quasi-concavity/Quasi-convexity of Functions: I never remember the derivative or algebraic definitions for these terms.  Best to draw pictures!  Three functions to remember: f=x^2 (convex), f=ln(x) (concave), and f=x^3 (quasi-convex and quasi-concave)

Rules of Integration: Emphasize the area-under-the-curve interpretation of an integral.  The rules for integration are easy IF you know your rules for differentiation.  The two processes work in reverse.  When taking an integral of f(x), think: what function would I need to take the derivative of to get the function f(x)?  Don't forget about the arbitrary constant!
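
One small caveat worth flagging if students check their work with sympy: integrate() leaves that arbitrary constant implicit:

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(2*x, x)
    print(F, sp.diff(F, x))  # x**2 and 2*x; the "+ C" is left off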

Wednesday, September 15, 2010

Phd Induction...

Had a nice meet and greet with the rest of the first year PhD students here at Edinburgh.  Also had my first meeting with my supervisor Ed Hopkins today...I found out that he was supervised by Alan Kirman!  Fantastic...that makes Alan Kirman my intellectual grandfather (so to speak)...

Didn't get a chance to talk much about my research agenda, but we will meet again in a few days time to discuss how I plan to move ahead over the next year.  I am now thinking that I will go heavy on the empirical work over the next year (with maybe a little of my own theory sprinkled throughout)...

A Nice Summary of Micro-foundations...

I link to a discussion paper on Micro-foundations from the Tinbergen Institute.  It nicely summarizes the debate on what constitutes "proper" microfoundations for macroeconomics.

Tuesday, September 14, 2010

Today's Morning Run...

Monday, September 13, 2010

Network Position and Interest Rates...

The title says it all: Systemically important banks get better terms for their overnight borrowing

Basically, they get a hold of a really nice data set from the Central Bank of Norway that covers a three-year period including the recent financial crisis (i.e., 2006-2009), and using a panel data econometric model they find that banks that occupy key positions within the interbank network in Norway were able to use this position to get better deals on interest rates for their overnight borrowing/lending.  They also found that interest rates depended not only on the total amount of market liquidity, but also on the distribution of that liquidity amongst the market participants.  This suggests that banks with surplus liquidity are able to exploit this market power in order to get beneficial rates.  Finally, they find that the aforementioned effects on interest rates were stronger during the recent financial crisis than in the period leading up to it.

Excellent Research Resource on Financial Networks...

Anyone interested in financial network analysis should check out Kimmo Soramaki's blog...

Quote of the Day...

"Once the paper is written I put it aside for a couple of weeks. Papers need to age like fine cheese - it's true that mold might develop, but the flavor is often enhanced."
-Hal Varian

Breaking: Kronecker Product of Bipartite Graphs is Disconnected...

The Kronecker product of bipartite graphs is disconnected.  This means that the Kronecker product of N-star graphs, which typically arise in the economic networks literature as the equilibrium network in situations where there are strategic substitutes, is disconnected.  Is this relevant? I think so, but haven't quite figured out why yet...

I want to make some type of statement like: strategic substitutes make the network structure not scalable (in some yet-to-be-defined sense), while strategic complements allow the network structure to be scalable.
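
The disconnectedness claim is easy to check with networkx (tensor_product is networkx's name for the Kronecker/categorical product).  Star graphs are bipartite, and by Weichsel's theorem the product of two connected bipartite graphs splits into exactly two components:

    import networkx as nx

    G = nx.star_graph(4)  # a hub plus 4 leaves; stars are bipartite
    H = nx.star_graph(3)
    P = nx.tensor_product(G, H)  # the Kronecker product graph
    print(nx.is_connected(P))                 # False
    print(nx.number_connected_components(P))  # 2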

Relationship Between Kronecker Graphs and Economic Theory...

As I am reading through material on Kronecker graphs I am thinking more deeply about the empirical properties of large graphs that Kronecker graphs are able to capture, particularly the densification power law and the shrinking/stabilizing diameter.  It occurs to me that the densification property is actually implied by a number of micro-founded models of endogenous network formation.  In the economics literature, the key to achieving this densification process is for there to exist strategic complements in the link formation game (i.e., I want to form links if other people are also forming links).  In such games, the complete network (which is as dense as a network can get) is typically an equilibrium network.  In sum, in the presence of strategic complements one should expect the network to become more dense over time.

In these economic network games with strategic complements it is generally assumed that agents have complete information on strategies, the number of other players, etc.  The ability to achieve complete network connectivity, and thus maximum density, depends on complete information.  However, in the real world, agents make link formation decisions based on local information, and as such complete network connectivity and thus maximum density will not be very likely in practice (even if it were desirable in theory).  Here is an interesting question: is there a link between local information/asymmetric information and the power law densification process that shows up in the empirical data?  Do networks formed by self-interested agents acting on the basis of local information about link options densify according to a power law?  A related question: how, if at all, are Kronecker graphs related to the behavior of self-interested agents?  To be continued...

Local Info v. Asymmetric Info...

For those of you with too much time on your hands...

On this very windy, and very Scottish, morning here in Edinburgh, I am trying to work out what the difference is between "local information" and "asymmetric information".  Is there a difference? Or is local information simply a case of asymmetric information? Do agents acting solely on local information create a situation of asymmetric information when they interact?  I think the answer to this last question is clearly yes!

A classic case of asymmetric information is Akerlof's lemons in the used car market.  It is typically assumed that the dealer has more/better/more accurate information about the used car than does the buyer, and thus we get an asymmetry of information.  Is it fair to say that the buyer's information set is a subset of the dealer's information set?  I will have to go back and review my Akerlof adverse selection notes on this point.

Local information seems to me to be a different beast.  With local information, two agents have different information sets.  There might be some overlap of information sets or there might not.  I think for local information to create a case of asymmetric information, the two information sets must overlap in some non-trivial way.

The image I have in my head is of two hill-walkers wandering around the highlands at night wearing the same brand of head torches (so that the amount of terrain that each can see at any given time is the same...no hill-walker has a technological comparative advantage in info gathering).  It is pitch black so the only information they can gather about the terrain is from what is illuminated by the head torch.  As they each wander around they are collecting information about the terrain locally because of their limited vision.  If they happen to wander over some of the same terrain, then they will have this information in common (that is their information sets will have some overlap...think 2D Venn diagram).  However they could even have information sets that are entirely disjoint (maybe one is not really a hill-walker and prefers to faff about in the valley, while the other is running around on the crags).  If they were to encounter one another in their wanderings, would this interaction be a case of asymmetric information?  I think it depends.  If one or both hill-walkers were interested in heading in the direction that the other had already been, then this is clearly a case of asymmetric information.  Perhaps they would even decide to trade information (assuming they had a technology that would allow them to do this).  However if they were interested in heading off in different directions, then this might not be a case of asymmetric information because neither hill-walker has information about where the other is going (there would also be no reason to trade).

Am I right to think about local and asymmetric information in this way?  Am I overcomplicating things by trying to make some distinction?  I think this is relevant to my research because financial institutions clearly use their local information to try and create a situation of asymmetric information that they can exploit for profit and trade (which creates financial inter-linkages, or denser financial networks).