What’s the LMS Worth?

Herein, against my better judgement, I wade into the Great Instructure social media wars of 2019. Last week, Instructure Inc., the publicly traded (NYSE: INST) company, announced it had agreed to go private and sell itself to private equity firm Thoma Bravo. For people who teach in higher education this is big news. Instructure is the current name for the company founded in 2008 that created and sells the Canvas LMS. Over the last decade, Canvas has toppled the previous king of the LMSs, Blackboard. Canvas is now widely reported to have the largest share of the higher ed LMS market, at least in North America. Moodle, the open source system, appears to dominate outside North America.

The announcement triggered a great deal of, let’s call it discussion, on social media, particularly Twitter. A lot of it has gotten nasty and heated. On the surface, the discussion seems to be about what Instructure (or Canvas, or the data Instructure has collected) is “worth”. Specifically, is it worth the $2 billion Thoma Bravo has valued it at, and why would TB pay that?

Underlying the valuation question, though, is the real concern: can we discern the plans and future for Canvas (and thereby schools, instructors, students, the higher ed system, pedagogy, etc.) from this transaction? There are roughly two camps. Both camps seem to think $2 billion is a big number. I don’t, but I’ll explain that later. One camp seems to be arguing that $2 billion is perfectly justified as a valuation for Canvas as it is now, as an ongoing successful business, and therefore there’s nothing to be concerned about here, nothing to see, just move along. The other camp seems to see $2 billion as a very big number and a clear indicator that Instructure’s new/future overlords will be monetizing the (relatively) massive database of user/student interactions (Instructure’s own claim as to its massiveness) and therefore putting students and faculty at risk from nefarious surveillance and profiling via AI (artificial intelligence and algorithms).

What I want to do is clarify some mistaken ideas/concepts that I see a lot of my education friends (and not-so-friends) arguing. What’s been argued, by both camps at times, is not good economics or well-informed finance. I’m not going to name folks here nor call out anyone in particular. That’s not my intent. I’m hoping to clarify some thinking.

What’s a company worth?

Both camps seem to be arguing the “worth” (in precise economic/finance terms, the “valuation”) of the company using the wrong theory or model of how valuation/worth is established. The implicit model being used by all is familiar in economic/finance theory. It’s the idea that the current value of an investment (i.e. the purchase price of the company) should somehow be justified as the expected present value of the future cash flows of the company from doing business. That’s understandable. It’s a decent way to start evaluating investment decisions – particularly inside companies when they decide to invest in something like a new machine or an expansion. It’s not the only consideration; there are strategic considerations too.
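
To make that implicit model concrete, here’s a minimal discounted-cash-flow sketch. Every number in it is hypothetical: a made-up starting cash flow, growth rate, and discount rate, not Instructure’s actual figures.

```python
# A minimal sketch of discounted-cash-flow (DCF) valuation.
# All numbers are hypothetical, purely for illustration.

def present_value(cash_flows, discount_rate):
    """Sum each year's expected cash flow, discounted back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Suppose a firm is expected to generate $100M of free cash flow next year,
# growing 10% a year for ten years, and investors demand a 12% return.
flows = [100e6 * 1.10 ** t for t in range(10)]
print(f"Implied value today: ${present_value(flows, 0.12) / 1e9:.2f}B")
```

The point of the sketch is only that this is the kind of arithmetic people have in mind when they argue about whether $2 billion is “justified” by the fundamentals.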

So, as an example, we’ve heard arguments that Instructure has been growing, generates cash, and has margins of 70%, so the valuation is reasonable and therefore there’s nothing for the education community to worry about.

On the other hand, some have essentially argued that the only reason private equity would pay this, and/or the only way they can recoup their money, is if they monetize the data, and that is presumed to lead to nefarious outcomes.

Let me clarify. The company was purchased, not the software and not an asset. The company. There is only one real-world way that valuations of companies are established: will somebody pay a higher price later for this same company? Let’s be very clear. This is a private equity deal. PE funds do not run companies. They do not sell things. They buy and sell companies. Period. That is all they do. The only customers they have are the other PE firms or corporations or banks that they sell their companies to. Period. Thoma Bravo is not in the education or edtech business. They are in the buying-and-selling-software-companies business. That’s it. And no matter what they say about “being in it for the long run”, they aren’t. PE firms generally look to recoup their money and sell the business inside of 5 years, preferably a lot sooner.

Conclusion #1:  No matter what any manager at Instructure or TB tells you, the needs of higher education are no longer the driving force.  The driving force is putting together a nice story supported by anecdotal financial data that leads to some future firm paying TB way more than $2b in a couple of years.

So is Instructure worth $2b? We’ll find out if and when TB sells it. My guess is yes, TB will definitely flip this in a few years for substantial profit, assuming the bottom doesn’t totally drop out of the LMS market (a small but real possibility).

Any argument you make about the deal based on business fundamentals is nonsense and fantasy. It’s part of popular econo-myths. Before you try to argue with me on that, do this one test: can your implied model of valuation explain why Uber went public at a valuation of ~$100 billion when Uber has never made money, is cash negative, and has no prospects of making money? Can your model explain WeWork? If you still don’t believe me, I suggest researching a little with Professor Scott Galloway (@profgalloway) about how valuations and funding happen in the real world these days.

What’s next?

What can we expect? Will the data be monetized? Will it be sold off piece by piece? Will Instructure/TB now invest heavily in all kinds of accelerated innovation? (OK, I just threw that last question in for laughs. Of course they won’t. Real innovation costs money, time, and work.) Really, we don’t know, but there are some high-probability outcomes given the new capital structure and owners.

First off, there’s the possibility of some good old-fashioned battle of the funds. We know very little about the specifics of the Instructure-TB deal. That’s how private equity works. It’s private. It’s not transparent. However, it seems that Instructure has 35 days (counting holidays) to find a better deal. Some other funds, hedge funds in this case, have taken positions in Instructure and they don’t think $2 billion is enough. Typically the only people who come out ahead in these situations are lawyers, banks, and partners at the biggest funds. Little shareholders and the rest of the human race, not so much.

Once the deal closes, the priority at Instructure will be clear and it has two parts. First priority is to get the money (cash) back to TB. I’ve heard it said on the Twitters that TB is putting out $2b of its own money to buy Instructure. Again, we don’t know the details for sure, but that’s almost certainly false. PE deals don’t work that way – especially with a company like Instructure that generates a healthy positive cash flow, is profitable, and has little debt (AFAIK). Typically the playbook is that the PE firm buys the company largely with the target company’s own money. In this scenario, the PE fund (TB in this case) puts up a relatively small amount of its own cash up front. They take a very short-term bridge loan from a friendly bank to get the total $2b in cash needed to buy out the shareholders. Once the deal closes, Instructure Inc. is directed by its new owner, TB, to take out a loan from a bank secured by the company’s assets. The proceeds of that loan are then paid as some kind of “special dividend” to the new owners to retire their loan. The PE fund has a small at-risk stake at that point. Management fees or a sell-off of some assets in the first year can often pay back that cash. By maybe the end of the first year, the PE fund has gotten all its cash back and is playing with house money. The target firm (Instructure in this case) is likely a lot more debt-laden than before, with a lot less free cash flow.
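
To see how that playbook adds up, here’s a stylized back-of-envelope version in code. Every figure is hypothetical; the actual Instructure/TB equity split, financing, and fees are not public.

```python
# A stylized sketch of the leveraged-buyout playbook described above.
# All figures are hypothetical; the real deal terms are not public.

deal_price        = 2.0e9   # price paid to take the company private
pe_equity_upfront = 0.4e9   # the PE fund's own cash at closing (assumed)
bridge_loan       = deal_price - pe_equity_upfront  # short-term bank financing

# After closing, the target borrows against its own assets...
target_new_debt = 1.6e9
# ...and the proceeds go out as a "special dividend" that retires the bridge loan.
bridge_loan -= target_new_debt

# Management fees and asset sales in year one chip away at the rest.
fees_and_asset_sales  = 0.4e9
pe_cash_still_at_risk = pe_equity_upfront - fees_and_asset_sales

print(f"Bridge loan remaining:          ${bridge_loan / 1e9:.1f}B")
print(f"PE cash still at risk (year 1): ${pe_cash_still_at_risk / 1e9:.1f}B")
print(f"New debt carried by the target: ${target_new_debt / 1e9:.1f}B")
```

In this stylized version, the PE fund ends the first year with essentially none of its own money left on the table, while the target carries the new debt.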

At that point, we consider the other priority (don’t worry, these folks can multi-task, so you’ll hear this one right away). Namely, the big priority is to develop a story that leads to another big pocket putting out well more than $2 billion in a few years. Tell the story and tell it hard. Once they’re private, that becomes a bit easier. Less real data has to be disclosed since they’re no longer public, so it’s easier to be selective with the data and put your own spin on it without fear of those pesky shareholder suits and the SEC (is anyone actually still afraid of the SEC?).

PE firms, like Venture Capitalists or hedge funds, aren’t looking for nice safe returns on their money. You and I would be ecstatic to get annual returns of 10-20% on our retirement funds. These funds look for more. They want multiples of the initial investment. So they’re looking for deep pocket buyers that can and will spend not $2b, but maybe $4b or $6b or more in just a couple years.  The PE fund wants a big exit and once the deal closes the only thought is the exit. Running the business is only important to the degree it helps tell a story that helps them exit.

Why would anyone pay that a couple of years from now? Go back up to the section on “What’s a company worth?”. There aren’t that many exit routes for a PE firm:

  • do an IPO (initial public offering) – not likely here since they just took it private; obviously the public market wouldn’t value it highly enough
  • find a bigger sucker PE fund – the story of why there are untold, untapped riches becomes critical
  • find a really big, deep-pockets corporation that wants to add to its portfolio of businesses, thinking this will add that magical “synergy” to its other businesses. This is a possibility for Instructure, and the likely candidates are:
    • Google, FB, MSFT, Amazon, or Apple – the people trying to collect everybody’s data about everything in the hope of controlling/monetizing everything.  A story of the value of the data and the ability to predict the future lives of students could lead them to write a big check.
    • Textbook publishers – OK, there are only two left, Pearson and Cengage-McGraw Hill. They could fall in love with a story of becoming the single-source books-homework-courseware-LMS provider. In fact, they’ve tried the LMS before, but couldn’t do it themselves. They might choose to buy in. I’m not sure their pockets are deep enough, though.
    • When all else fails, merge. Instructure could be merged with Bb or Brightspace using some other PE fund’s money.

Whatever route leads to the exit, that’s the priority now at Instructure. In my opinion, all those avenues are fraught with very good reasons why colleges, professors, and students should be concerned.

Where will the money come from?

Another thing I read on the Twitter was the suggestion that Instructure is somehow impervious to the all-too-common private equity strategy of carve-it-up and sell off the parts. Nonsense. That tweet came from somebody who purports to know and advocate for private equity but apparently, judging by their tweet, thinks Hollywood movies about whores are primers on finance. I won’t deal with that aspect of the tweet other than to say that misogynistic tweet was all the evidence needed to convince me the dude has spent too much time in either tech or finance culture. Unfortunately, he’s not very skilled at the private equity portion. It takes little imagination to see how Instructure could be carved up and the pieces sold off. I’m not saying they will. I’m just saying it’s a piece of cake. They’ve made 2-3 acquisitions in recent years. Reverse those and sell. They’ve already told everyone they’re positioning for a possible split-off. They’ve stated they’re separating the codebase for Bridge from Canvas. Add to that, any business with multiple services, even when sold to the same segment, can be carved up. It doesn’t even take much imagination to do it. All it takes is a willing buyer. And all that takes is a plausible story about the riches at the end of the rainbow.

Education is not THE Story Anymore

We in higher education have a tendency to think we’re important as a market. We’re not. For a long time, edtech companies and Silicon Valley have fed that fantasy. We think in terms of the edtech “market” and think it’s attractive. In truth, it’s largely failed to meet SV expectations. The LMS market is mature. Very mature. Most LMSs are really based on 1990s architectures ported to the Web. Canvas was an innovation in 2008 by being cloud-based. But product-wise, all of them are still largely the same conception of the product as 20+ years ago. Everybody who needs an LMS has one.

Yes, Instructure has had decent growth numbers (not sterling by SV standards, but good) in recent years. But finance is all about how you’re going to top that going forward. Finance doesn’t look back. Truth is, Instructure, or any of the LMSs, is going to have a hard time finding big new sources of revenue. There just isn’t much left in the higher ed budget for their stuff. Even the data analytics for learning part has failed to take off revenue-wise. That’s why data mining for AI/algorithms, monetizing the data to non-education folks, is so tempting.

Yes, any of these LMS firms, or publishers for that matter, could have had decent, solid, stable, modestly profitable businesses in a mature market. But that’s not how finance capitalism works. Instructure isn’t an education tech company anymore. It’s just a software company and data processing service that happens to get its data from college and university students. It will likely be managed that way.

FUD for thought?

I should put a word in about FUD. I’m not sure if I introduced it into the conversations on Twitter or somebody else did. I didn’t realize the term was new to so many. It’s an acronym that stands for Fear, Uncertainty, and Doubt. The original usage that I’m familiar with dates back to software and business deals in the 90s. FUD was something some firms tried to create in the market about their competitors. For example, back in those days, Microsoft was often accused of putting out PR releases and statements trying to create FUD about whether Linux or open source software was any good. A more recent example in the edtech world would be a few years ago when for-profit publishers would spread stories casting doubt (FUD) on whether OER was any good. They helped perpetuate doubts about the quality of OER in order to justify their high-priced books. Nowadays, those publishers have tried to enclose (“embrace and extinguish” – another old Microsoft strategy) OER instead of spreading the FUD.

The thing about FUD is that it usually isn’t specific or justified.  It’s an attempt to cause people to feel uncomfortable about things.

The ironic part now is that I don’t think the concerns expressed on Twitter about the Instructure deal are FUD. What the concerns have shown is that there’s reason to be uncertain – the details aren’t disclosed and won’t be. There’s good reason to be doubtful: private equity deals very often do end up butchering or hampering the core business.

And there’s reason to be fearful:  that giant database of student data has value to big players in the surveillance capitalism industry. There’s the big obvious ones: Google, MSFT, Apple, Amazon, and FB. But there’s a host of other hidden players – data brokers, Palantir, banks, and many others, the lords of the algorithm cults. They often have deep pockets or they’re backed by funds with deep pockets. All Instructure/TB needs to do is convince them of a story about how Instructure’s data can add value to their existing trough.

A Final Lesson

I’ve argued extensively that higher education (perhaps all education, but I’m no expert in K-12) is best organized as a commons. The boundary between commons and the market-oriented capitalist economy is tricky. Capitalists and market-thinkers inevitably seek to enclose the commons, privatizing benefits and externalizing costs onto society.

This boundary is particularly tricky in the edtech world. If there’s one lesson I hope to impart to people in education, it’s the need to do your due diligence on your vendors and “partners”.  Current product offerings aren’t enough. Product roadmaps matter. Plans matter.

But most of all, capital structure matters. No matter how nice the people at the vendor, no matter how good the values of the hired managers are at that edtech “partner”, ultimately it’s capital that calls the tune.  That’s why it’s called capitalism.

The Mastodon In the Room

I’m writing this post because I can’t fit my thoughts into 500 characters. This is a very loose set of (probably) ill-connected thoughts triggered by discussions on Mastodon.social. If you don’t know what Mastodon is, it’s a kind of open source, decentralized/federated alternative to Twitter. Sort of. Of course some have said it’s an alternative to Slack. Sort of. Who knows? This post is an attempt to add to that confusion. If you’re still interested but don’t know Mastodon, check out Maha Bali’s piece on a Social Network of Our Own.

What prompted this post was my own post on Mastodon a day or so ago:

[screenshot of the Mastodon post]

Part of what I love about Mastodon (as compared to Twitter) is the 500 vs. 140 character limit. It makes a huge difference. It enables more thoughtful posts – they not only express deeper/richer thoughts, but reading the posts often requires more thought. They’re more engaging. IMO it makes a very happy medium between Twitter-like “conversations”, which are really just rapid exchanges of one-liner quips, and the blogosphere, which is more like an exchange of letters.

First some semantics. I’m using the following words to mean:

  • Followees: the people a person “follows” on social media; in other words, the people whose stuff I’m interested in reading. This is in contrast to followers, who are the set of people who read what I write.
  • Stream: the reverse-chrono list of posts that a person reviews as their primary way of finding out what their followees said. In Twitter, it’s the main stream you read. Mastodon is different because there’s the Public stream of all things (not really accessible except via API in Twitter) and the Home stream. The Home stream is closest to the Twitter main stream.
  • Scale:  more of the same.  Example: If I add 50 more followees who are all interested in the same types of things such as Open Ed, I’m scaling up.
  • Scope: adding stuff/things/followees who are different from the rest.  Increasing scope means increased heterogeneity.  For example, if I already have 50 followees that tend towards the open ed-ish, and then I add 20 folks who don’t talk open ed but talk about games and then add 10 more who talk football, I’m increasing my scope.
  • Filter: a rather techie term for creating a subset of the stream by applying some boolean logic to some aspect of the toots/tweets. Filtering is often done on tags but could conceivably be done on text or names. (There’s a minimal sketch of what I mean right after this list.)
  • Rooms: a non-tech term used to describe the experience of having/seeing/speaking with a group of particular tooters/tweeters.
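
Here’s the filter sketch I mentioned: a minimal, made-up example of boolean filtering on tags. The Toot structure and the sample tags are mine for illustration, not Mastodon’s actual data model or API.

```python
# A minimal sketch of a tag filter. The Toot structure here is made up,
# not Mastodon's real data model or API.

from dataclasses import dataclass, field

@dataclass
class Toot:
    author: str
    text: str
    tags: set = field(default_factory=set)

def filter_by_tags(stream, wanted_tags):
    """Keep only toots carrying at least one of the wanted tags."""
    return [toot for toot in stream if toot.tags & wanted_tags]

stream = [
    Toot("jim", "New post on OER adoption", {"openeducation"}),
    Toot("pat", "Great game last night!", {"football"}),
]
for toot in filter_by_tags(stream, {"openeducation"}):
    print(toot.author, "-", toot.text)
```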

Here’s what’s occurred to me so far:

  • Scale in the Stream: Twitter’s short, 140-character style makes it a lot more feasible to scan/review a stream when there are larger numbers of contributors to it. Of course, if you follow enough folks on Twitter, the primary stream you see becomes difficult to deal with, but mostly just because of the sheer volume of tweets per minute. The mix of short and longer toots on Mastodon makes a stream harder to deal with cognitively much sooner as you scale up followees. This is because the longer posts encourage more cognitive engagement and (at least amongst my peeps) more responses that are at least cognitively linked. I suspect a smaller number of followees (people you follow and hence read in your home stream) will trigger a feeling of being “maxed out” in Mastodon than in Twitter.
  • Scope in the Stream: This problem of cognitive load and the time involved to process the stream gets particularly bad if you increase scope. I can easily process two juxtaposed tweets on different subjects. They tend to stand alone and they’re short and cognitively shallow. Toots are much harder when scope increases.
  • On the counter side of increased cognitive load is the need to have some openness to new topics, new speakers, etc. That’s often where the serendipity comes from. We don’t want to lose that aspect, because then it just becomes an echo chamber.
  • I don’t think filters can get us the “room” experience. Filters are text-specific somehow: tags, keywords, etc. Further, setting up filters must be done in advance, but that then precludes the serendipity and closes off the openness.
  • Jeroen Smeets asked if what we were (I was) talking about was creating a Storify-type thing. In some ways, yes, it would be like creating a Storify, except Storify is dead – it’s an archive of the past. I’m interested in viewing my live stream in ways that give me the Storify experience in real time.

So I’ve come up with the idea of a “lens” or “lenses”. I’m aware that I might be reinventing something called lists, but since I’m not really familiar with Twitter “lists”, so be it. It won’t be the first time I’ve reinvented the preexisting.

Let’s start with the public stream. It’s everything that’s coming through the network. While I like the ability to see the public timeline stream on Mastodon, as soon as Mastodon users start to reach really large numbers it will be useless for direct human reading, except for the occasional dip into a small segment of it just for grins. Nonetheless, the public timeline stream holds great potential because, with open source, who knows what folks might create some day that can make computational use of it.

A lens is a way that a user can view the giant public timeline. On Twitter, there’s only one lens per user. That lens creates your home timeline stream from the all-public stream. The primary element used to create the Twitter lens for each user is the list of your followees. If a tweet in the big timeline involves your followee (from, to, mentioned), it becomes visible through your lens. This is the original functionality that we fell in love with on Twitter.

What happened? Well, two things. First, Twitter expanded your lens without your involvement by using algorithms to select tweets to put in your stream even if you didn’t want to follow those people. A lot of this is advertising- and “promoted tweets”-related. Part of it is because Twitter as a company also needed to boost the amount of time you spent on your stream. All of this is because of business model and $. Mastodon should be able to avoid this because there are no VCs/investors to be made rich (although we need to make sure @gargron and others live a decent life!) and because the decentralized, federated servers model allows what I expect will actually be a lower cost per toot across the total system than Twitter’s centralized system.

But there’s another thing that expands your Twitter lens. Twitter needs/wants numbers: users, tweets, views, minutes spent. That’s what they need to monetize. To do that they enable trolls. Suppose for a moment there are not-quite-human-like entities we’ll call trolls, and their mission is to make people miserable on social media. Trolls can find you and force their way into your stream – force their way through your lens. Your only alternative is to be reactive and block everything the troll ever says in the future.

To boost numbers, Twitter also encourages the use of bots. Your human friends have a cap on how many tweets they can make per hour or minute. Bots don’t. Your stream starts to get polluted with trolls and bots. You get tired. You feel attacked.

So how do we avoid this? First, we need to build a culture in Mastodon where numbers don’t matter. It’s about the conversation, not the monetization.

Second, we – note how I bravely use the royal “we” knowing I can’t code this thing 😉 – might want to pay attention to code or sign-up provisions that verify there’s a human at the keyboard/phone making those toots. Machine-made toots will just turn the place into a sewage treatment plant.

Third, I’d like to see a two-level lens created. The first lens is the existing Home stream: it’s a subset of the public timeline stream where all I see are posts from the people I follow and anything directly connected (like a mention or reply or boost) by/about one of my followees. This lens should be done at the server level. It’s what I should get back when I refresh.

But what if I could define for myself (user-defined) a second-level lens: a subset of my followees? In the user interface, I could turn the secondary lens on or off. I could define 2, 3, or so different second-level lenses. Selecting a lens means I see my home stream as if that subset of followees were my entire stream. This would enable folks to deal with their social connections as they would in real life. My home stream is the comments of everybody I know and care about in my life. But I am surrounded primarily by academics when I’m at work – it’s my academic secondary lens that’s activated there. When I go home, I turn off the academic lens and put on my family-neighbors lens.
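
To make the idea concrete, here’s a rough sketch of the two levels in code. The names, data shapes, and lens definitions are all mine for illustration; nothing here reflects Mastodon’s actual data model or API.

```python
# A rough sketch of the two-level lens idea, purely illustrative.

home_followees = {"maha", "jeroen", "cousin_pat", "neighbor_sam"}

# Second-level lenses: user-defined subsets of followees, switchable in the UI.
lenses = {
    "open-ed":          {"maha", "jeroen"},
    "family-neighbors": {"cousin_pat", "neighbor_sam"},
}

def home_stream(public_stream):
    """Level one: everything by, or directly involving, my followees."""
    return [post for post in public_stream
            if post["author"] in home_followees
            or home_followees & set(post["mentions"])]

def apply_lens(stream, lens_name=None):
    """Level two: optionally narrow the home stream to one lens's followees."""
    if lens_name is None:
        return stream
    subset = lenses[lens_name]
    return [post for post in stream if post["author"] in subset]

public = [
    {"author": "maha",       "text": "New post on open pedagogy", "mentions": []},
    {"author": "cousin_pat", "text": "Thanksgiving plans?",       "mentions": []},
    {"author": "stranger",   "text": "Hot take",                  "mentions": ["jeroen"]},
]

# At work: view the home stream through the "open-ed" lens only.
for post in apply_lens(home_stream(public), "open-ed"):
    print(post["author"], "-", post["text"])
```

With a layout like this, the first level lives on the server (it’s just the home timeline), while the second level could be a purely client-side view, which is part of what makes it feel feasible.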

A user-defined lens would also allow me to more frequently watch/monitor my stream for the people I consider time-sensitive. For example, for me the folks I think of as “open ed academics” are people I want to monitor frequently during the day, regardless of what they say. The folks I follow who are more techie – say WordPress or Mastodon developers – are folks whose thinking I want to know about, but I might only want to see/hear it once a day. I could do that.

The lens concept, by being user/viewer defined, also means we don’t have to have social agreement a priori on a hashtag, or who’s in or who’s not.  I see the room as I want to see it.  I might think of you as part of my “open ed” lens.  Assuming we follow each other, you might want to see my stuff as part of your “white guy blowhards” lens.  To each their own.

The lens concept also allows a user to see less of a possible troll without necessarily having to permanently block them.

Well, that’s my $0.0185 worth. (Inflation has reduced the value of two cents.)

Running Errands for Open Learning Ideas

This is my presentation for Open Ed 2016 in Richmond, VA. It’s kind of a progress report on the LCC Open Learning Lab project. It’s very much a work-in-progress (the Lab project, not the presentation). Assuming the universe cooperates, I’ll follow up on this posting of the slides with a few long-form posts explaining what I said and going into some more detail.

If perchance your browser or Internet connection takes too long to load the above presentation, you can download the file here.


Coming Out Party for OpenLCC

So the journey that started with creating this blog back in 2008 is taking another big step.  Today I’m launching and announcing the OpenLCC network (openlcc.net).  Let me retrace a few steps and explain.

I started this blog with two purposes: to teach myself what this “blogging” brouhaha was all about and to see if putting my thoughts about economic news in public might be of interest or use in teaching my classes. Please keep in mind that back in 2008 the economic world was collapsing and we here in Michigan were at ground zero. The textbooks didn’t really have much to say about it. Well, it was a rousing success. Students liked it. I liked it. I was hooked. And hooked is probably the right term. I kept going for bigger and bigger fixes. Next it was a self-hosted teaching portfolio & syllabus site at jimluke.com. Then it was trying to create a mini-MOOC (Little Open Online Course?) for my principles courses. Student success rose. Engagement rose. It was easier to manage. Then it was getting the students in on the fun. I let them blog and write in public for my two gen-ed-oriented courses.

All this led to an opportunity this year to take some “re-assign time” to create an Open Learn Lab here at Lansing Community College. By the way, for the non-academics, “re-assign time” is a polite way of saying the school lets you cut back your teaching load by the equivalent of approximately a day a week in return for devoting 2-3 days per week to working on some additional project. Anyway, I did it. And now we’re doing it. The Open Learn Lab is modeled after the Domain of One’s Own programs that were pioneered at the University of Mary Washington and are now at several (20-30?) major universities. We’re the first community college. I’m really excited.

Of course this means I’ll likely be blogging about some teaching, higher ed, and open learning topics now. But I hope to also keep blogging about economics (I still do teach some classes!). Anyway, here are the slides for the “coming out” informational presentation on campus. Like most of my stuff, it’s Creative Commons licensed, BY-SA (attribution and share-alike). If you want to download the PPT or speaker notes, click on the little gear.


Solar Power Looking Brighter Economically

John Quiggin of Crooked Timber points us to “Solar Gets Cheap Fast” at Grist.org for good news about solar power. The cost of producing solar photovoltaic cells (the silicon-based cells that convert sunlight to electricity) has been declining consistently at 20% per year since the early 1980s. Solar power is now close to the point where it is cost-competitive with fossil fuels (coal, natural gas) and nuclear. When we consider that any new coal-fired power plant will take 5 years or more to build (nukes even longer), then solar is now within the competitive horizon. This is great news.

It really shouldn’t be news, though. We should have expected it. Anytime a new product/technology goes into production, per-unit costs generally decline. And they generally decline at a predictable rate. Two micro-economic phenomena combine to produce this predictable declining cost curve. The first is often described in principles textbooks (although often over-stated): economies of scale. As production volumes get larger, per-unit costs often (not always) decline because cheaper production technologies become feasible – it’s the phenomenon of mass production. But another curve is involved. It’s called an experience curve. Basically, an experience curve summarizes how, even using the same scale of technology, producers become more efficient as they accumulate experience producing the item. In plain talk, it’s called learning-by-doing. It’s a staple of many business strategies, particularly in electronics. While the specific improvements aren’t foreseeable ahead of time, the fact that costs will decline is predictable. In other words, it’s predictable that we will learn.
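
As a rough back-of-envelope illustration of what a steady 20% annual decline compounds to (the 20% figure is the one cited above; the starting cost is an arbitrary index, not a real price):

```python
# Back-of-envelope: compounding a steady 20% annual cost decline.
# The starting cost of 1.0 is an arbitrary index, not an actual price.

cost = 1.0
for year in range(1, 31):
    cost *= 0.8   # a 20% decline leaves 80% of last year's cost
    if year % 10 == 0:
        print(f"After {year} years, the cost index is {cost:.4f}")
```

At that pace, each decade cuts the cost to roughly a tenth of what it was, which is how a technology that looked hopelessly expensive in the early 1980s can arrive at competitiveness a few decades later.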

Socially and economically, the arrival of wide-scale solar electricity generation is a good thing. Solar electricity generation doesn’t create greenhouse gases or other pollutants. It can be more effectively decentralized, relying less on huge power plants. The systems involved aren’t dependent on the kind of complex safety systems that make nuke power and coal dangerous. It doesn’t require a huge distribution and logistics network to mine/drill and transport a scarce natural resource like coal or gas. And solar installations can do double duty: unlike growing plants for bio-fuel or strip mining for coal, solar can be generated on top of existing buildings. Critics often claim that solar is unreliable because “the sun doesn’t always shine”. But solar systems fit well with the demand for electricity. Demand for electricity peaks in summer when the sun shines the most (think air conditioning). So the condition that creates peak demand for electricity is exactly the condition that generates solar power. Further, newer solar systems are increasingly capable of generating electricity (albeit not as much) from just daylight even when bright sunlight is not present (does your solar-powered calculator stop working indoors?). Personally, I’d rather trust the sun to rise each day and provide daylight than trust that engineers have perfect control of the safety of an inherently dangerous and polluting power plant (Fukushima, anyone?).

The arrival of cost-effective solar power is also an object lesson in why government subsidies are often justified for new technologies. Often, when new technologies are invented, the costs (the “business case”) are too high to be practical or competitive with existing alternatives, despite the conceptual attractiveness. We have a new-technology chicken-and-egg problem. Private investors and private firms won’t touch the new technology because it will take too long for costs to decline to a point where they can make the kind of high returns they want. It’s too risky for them and too long-range. Private investors and corporations really don’t think very long term. But until somebody actually begins producing the item, we don’t gain the benefits of economies of scale or learning experience. It’s at times like this that governments can play a great role. Governments, by borrowing at the lowest interest rates, can take the long-run view. They can invest because the benefits will be social and benefit the larger economy later. Governments have, in fact, been key to creating new technologies and economic growth throughout the last several hundred years. The telegraph, the telephone, electrical generation/distribution, canals, railroads, improved ocean navigation, computers, networks (including the Internet and World Wide Web that brings you this story), automobiles, aircraft and airlines — all these were dependent upon government early on and would not have happened had it not been for government.

That’s not to say government should always own, operate, and scale up the businesses that do it.  There’s a variety of mechanisms for government to seed and feed new technologies.  But that’s a different discussion.