Winning the Hardware-Software Game - 2nd Edition

Using Game Theory to Optimize the Pace of New Technology Adoption


Generating Value from AI Systems

Essential Components

Feedback Loops

Stated Benefits of Open Source Systems

Focus on Projects that Benefit Humanity

Mitigate Power of Single Entity

Benefit From and Improve the Technology

Attract Elite Researchers

Why Do I Think OpenAI Was Established As Open Source?

The More Obvious/Discussed Justifications

The Less Obvious/Discussed Justification

 

In Part 1 of this analysis, I provided some background information on AI as a foundation for the discussion. In this part of the analysis I continue on to discuss why I think Elon Musk designated OpenAI as an open source entity.

A copy of the full analysis can be downloaded by clicking on the link at the bottom of this blog entry. 

 

Generating Value from AI Systems

Essential Components

I’ve mentioned several times throughout the analysis that AI technology involves three essential components: AI algorithms (software), AI platforms (hardware), and big data. In this section I describe the nature and use of these components in more detail.

I like the way Neil Lawrence describes the AI system in “OpenAI won't benefit humanity without data-sharing.” He uses the analogy of cooking, where AI algorithms are the recipes, the data are the ingredients, and the platform is the stove or oven.

Anyone who has tried to come up with an original recipe will tell you that it generally needs to be tweaked before you come out with the ideal output. Similarly, researchers design AI algorithms, test and train them by running data through them, then tweak them to improve their performance.

Generally speaking, the better cooks are those with more experience, and they tend to be the ones who come up with the best recipes. Of course, occasionally unknown or unpracticed chefs come up with excellent recipes, but that’s not the norm. Similarly in AI, the better, more experienced researchers are the ones who will probably generate most of the advancements in AI. However, that does not preclude the possibility that some unknown savants will be able to come up with advanced solutions on their own.

Also, in cooking, better ingredients produce better dishes. Similarly, in AI, higher quality data lead to better results – as the saying goes, garbage in, garbage out. At the same time, AI algorithms become more accurate (trained) as they run more data. This means that having access to larger volumes of data will generate more accurate algorithms. So when it comes to data, both volume and quality are important.

Finally, when cooking, the size of the oven constrains the volume of food that can be produced. Similarly, with AI algorithms that need to run through large volumes of data to become properly trained, larger, more efficient hardware systems produce results much more quickly than do smaller systems.

Researchers

The field of AI in general, and machine learning in particular, is relatively new, so the pool of researchers who have been able to gain experience in this area is relatively small. As described in the introduction to this section, most of the new advancements in AI will likely come from experienced researchers. As such, any organization that seeks to excel in the area will want to recruit from this small pool of experienced researchers. As competition in machine learning has heated up, it is thus understandable that demand for individuals from the small group of elite researchers has intensified tremendously. The following excerpts reiterate these points.

From Cade Metz in “Facebook Open Sources Its AI Hardware As It Races Google”:

… [T]he community of researchers who excel at deep learning is relatively small. As a result, Google and Facebook are part of an industry-wide battle for top engineers.

From Andre Infante in “Microsoft vs Google – Who Leads the Artificial Intelligence Race?”:

Because deep learning is a relatively new field, it hasn’t had time to produce a large generation of experts.  As a result, there’s a very small number of people with expertise in the area, and that means it’s possible to gain significant advantage in the field by hiring everyone involved.

From Precious Silva in “Facebook vs Google: Race to Build the Next Artificial Intelligence System”:

According to a report by Top Tech News, Google, Facebook and similar large companies are looking for and hiring scientists related to artificial intelligence. The companies appear prepared to invest considerably on development of the technology.

From Dave Gershgorn in “How Google Aims to Dominate Artificial Intelligence”:

"A very small amount of companies have been trying to hire up a very large percentage of the talented people in artificial intelligence in general, and deep learning in particular,” Manning [Stanford computer science professor] says.

Hardware/Platforms

Until very recently, big data were processed using central processing units (CPUs), which rely on sequential processing. More recent advances in computer processing have been achieved using graphics processing units (GPUs), which employ parallel processing. GPUs have enabled organizations to process data exponentially faster and more efficiently than was previously possible. Wikipedia describes GPUs as follows:

A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display... Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more effective than general-purpose CPUs for algorithms where the processing of large blocks of visual data is done in parallel…

The term GPU was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU", a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines capable of processing a minimum of 10 million polygons per second.

NVIDIA provides a bit more explanation about GPU-accelerated computing and how that differs from more traditional CPU processing.

WHAT IS GPU ACCELERATED COMPUTING?

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small-and-medium businesses around the world. GPUs are accelerating applications in platforms ranging from cars, to mobile phones and tablets, to drones and robots.



CPU VERSUS GPU

A simple way to understand the difference between a CPU and GPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
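
The serial-versus-parallel distinction can be loosely illustrated in software. The sketch below is an illustration only (NumPy vectorization standing in for a massively parallel architecture, not a real CPU/GPU benchmark), comparing element-at-a-time processing against a single bulk operation:

```python
import math
import time

import numpy as np

# Toy illustration of serial vs. parallel-friendly processing (not a real
# CPU/GPU benchmark): a sum of squares computed one element at a time,
# versus as one bulk operation that parallel hardware can split across cores.
data = np.random.rand(1_000_000)

start = time.perf_counter()
serial_total = 0.0
for x in data:  # one element per step, like a few fast sequential cores
    serial_total += x * x
serial_time = time.perf_counter() - start

start = time.perf_counter()
vector_total = float(np.dot(data, data))  # one bulk, parallelizable operation
vector_time = time.perf_counter() - start

print(f"serial: {serial_time:.3f}s  bulk: {vector_time:.5f}s")
print("results agree:", math.isclose(serial_total, vector_total, rel_tol=1e-9))
```

On most machines the bulk version runs orders of magnitude faster, which is the same qualitative gap that a GPU's thousands of cores exploit on large training workloads.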

Cade Metz, in “Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine,” describes the importance generally of compute power and efficiency in providing an edge in competitive tech environments:

Google became the Internet’s most dominant force in large part because of the uniquely powerful software and hardware it built inside its computer data centers…

Kevin Lee and Serkan Piantino, in “Facebook to open-source AI hardware design,” describe the importance of GPUs specifically for use in advancing AI:

Although machine learning (ML) and artificial intelligence (AI) have been around for decades, most of the recent advances in these fields have been enabled by two trends: larger publicly available research data sets and the availability of more powerful computers — specifically ones powered by GPUs. Most of the major advances in these areas move forward in lockstep with our computational ability, as faster hardware and software allow us to explore deeper and more complex systems.

Richard Waters, in “Investor rush to artificial intelligence is real deal,” reconfirms the importance of powerful processing systems for advancing AI:

“The real battle isn’t being fought over the underlying machine learning technology, it’s in building support systems to make it usable.” These ancillary technologies include the data “pipes” needed to funnel large amounts of information, he [Stephen Purpura, whose own AI company, Context Relevant, has raised more than $44m since it was founded in 2012] says, as well as control systems needed to make sure AI operates within acceptable business parameters.

Big Data

Advancements in machine learning algorithms require access to large volumes of high quality data. Neil Lawrence stresses the importance of the big data component, especially high quality big data, for AI system performance in “OpenAI won't benefit humanity without data-sharing”:

… [I]n machine learning we don’t control every decision that the computer will make. In machine learning the quality of the ingredients, the quality of the data provided, has a massive impact on the intelligence that is produced.



The machine-learning ideas that underpin the current revolution in digital intelligence were all academic innovations. What constrained progress was the lack of data and access to large computational facilities...

This is probably why Facebook and Google have so freely shared their methodologies: they know that the real value in their companies is the vast quantities of data they retain about each one of us.

In “How Google Aims to Dominate Artificial Intelligence,” Dave Gershgorn gives some idea of the data volume requirements.

But all this relies on those initial audio files, which is called training data. This training data is actually made of millions of real voice searches by Google users.

Given feedback effects (discussed in more detail below), the difference between large and very large data sets can make or break the adoption of a new technology. Charles Clover describes this issue in more detail.

While a typical academic project uses 2,000 hours of audio data to train voice recognition, says Mr Ng [chief scientist for Baidu], the troves of data available to China’s version of Google mean he is able to use 100,000 hours.

He declines to specify just how much the extra 98,000 hours improves the accuracy of his project, but insists it is vital.

“A lot of people underestimate the difference between 95 per cent and 99 per cent accuracy. It’s not an ‘incremental’ improvement of 4 per cent; it’s the difference between using it occasionally versus using it all the time,” he says.
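
Ng's point is easy to verify with back-of-the-envelope arithmetic: going from 95 per cent to 99 per cent accuracy cuts the error rate five-fold, from one failure in 20 attempts to one in 100.

```python
# Ng's point in error-rate terms: 95% -> 99% accuracy is not a 4% improvement
# in the user's experience; it is a five-fold reduction in failures.
for accuracy in (0.95, 0.99):
    error_rate = 1 - accuracy
    attempts_per_error = 1 / error_rate
    print(f"{accuracy:.0%} accurate: one error every {attempts_per_error:.0f} attempts")
```

A voice interface that misfires once every 20 commands gets abandoned; one that misfires once every 100 gets used all the time.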

Finally, the Internet of Things (IoT) will soon produce such vast quantities of data that only those with the best AI systems will be able to process them. The advent of IoT will thus massively increase the value of better machine learning AI systems. Theo Priestley provides more details on this point in “A Series Of Unfortunate Tech Predictions - Artificial Intelligence and IOT are inseparable”:

We’ve been thinking about the Internet of Things all wrong.

  • Big data analytics for IOT software revenues will experience strong growth, reaching $81 billion by 2022 says Strategy Analytics
  • Smart Cities will use 1.6 billion connected things in 2016 says Gartner
  • By 2025 IOT will be a $1.6 trillion opportunity in Healthcare alone says McKinsey
  • 50 billion+ connected devices will exist by 2020 says Cisco
  • Data captured by IOT connected devices will top 1.6 zettabytes in 2020 says ABI Research
  • There are 10 major factions fighting to become the interoperating standard for IOT
Numbers. Numbers. Numbers.

If the predictions are to be believed, there is no way that current analytical solutions will be able to manage that level of information across that size of connected landscape. Of course, no singular platform needs to, but all solutions in the immediate future will require artificial intelligence capabilities. Which means SAP, Oracle, IBM, Cisco and all the rest who have an analytics platform play will have to invest in A.I. research, acquire and finally emerge with solutions based on methods beyond machine learning.

Or risk being left behind.

But these dinosaurs are being left in the dust already, by consumer led companies such as Apple, Google and Facebook. It’ll be one of these, or another with a foothold in the consumer world, where the real A.I. breakthrough will emerge, not academia or the sciences.

 

Feedback Loops

Feedback Loop Specifics

Machine learning is based on feedback loops. The following outlines several different paths (A, B, C, D, and E below) that feedback loops take in the machine learning process, involving the essential components presented in Figure 1.

1A. Data train algorithms

2A. Greater volumes and higher qualities of data lead to better-trained algorithms

3A. Better-trained algorithms are more accurate

4A. More accurate algorithms are more heavily used

5A. More heavily used algorithms get more training

6A. Go to 3A

 

1B. Better researchers design more valid algorithms

2B. More valid algorithms are more useful

3B. More useful algorithms are more heavily used

4B. More heavily used algorithms get more training

5B. Go to 3A

 

1C. Better platforms enable faster processing of data

2C. Faster processing of data increases the speed of feedback loops

 

1D. Open sourced projects attract more/outside researchers

2D. More researchers yield more valid and/or more accurate algorithms

3D. Go to 4A and 2B

 

1E. Open sourced projects attract more/outside users

2E. More users yield more training data

3E. Go to 2A
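
Loop A can be sketched as a minimal simulation. All of the functional forms and constants below are illustrative assumptions on my part (a diminishing-returns accuracy curve and usage proportional to accuracy), not quantities from any of the sources:

```python
# Minimal sketch of feedback loop A: data -> accuracy -> usage -> more data.

def accuracy(training_data):
    # 2A/3A: more data yields a better-trained, more accurate algorithm,
    # with diminishing returns (assumed saturating curve).
    return training_data / (training_data + 100.0)

def usage(acc):
    # 4A: more accurate algorithms are more heavily used (assumed linear).
    return 1000.0 * acc

data = 10.0  # small initial training set
for step in range(1, 6):
    acc = accuracy(data)
    data += usage(acc)  # 5A: heavier usage generates more training data
    print(f"step {step}: accuracy {acc:.1%}, accumulated data {data:,.0f}")
```

Accuracy climbs on each pass through the loop, which is the virtuous cycle that later sections attribute to first movers.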

Feedback Loops => Component Synergies

One aspect of the feedback effects associated with the production of advances in AI technologies is the synergy between the three key components (displayed in Figure 1). More specifically, the three components interact so that the whole is greater than the sum of the parts. In other words, an organization that has (i) the best researchers; (ii) access to the largest volumes of high quality data; AND (iii) larger, more efficient platforms to run them on could be expected to generate advancements much more quickly than other organizations with only one or two of the three superior components. These dynamics will be discussed further in the next few sections.
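
To see why joint superiority matters, suppose (purely as an illustrative assumption) that progress scales multiplicatively in the three components rather than additively. Then an organization superior in all three pulls far ahead of one that is superior in only two:

```python
# Illustrative sketch of component synergy: progress assumed multiplicative
# in researchers, data, and compute (an assumption, not a measured law).

def progress(researchers, data, compute):
    return researchers * data * compute

all_three = progress(10, 10, 10)    # superior in every component
two_of_three = progress(10, 10, 1)  # superior researchers and data, weak compute
print(all_three, two_of_three, all_three / two_of_three)
```

Under this assumption a single weak component caps the whole system, which is consistent with the scramble to secure all three resources at once.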

Before jumping ahead, however, I want to make the following observation. The recent frenzy exhibited by some of the larger players in the ecosystem (e.g., Google, Facebook, OpenAI) might be due to expectations of an upcoming leap enabled by recent advancements in all three components of AI systems. Dana Liebelson, in “Why Facebook, Google, and the NSA Want Computers That Learn Like Humans,” confirms that this expectation exists.

"There's a big rush because we think there's going to be a bit of a quantum leap," says Yann LeCun, a deep-learning pioneer and the head of Facebook's new AI lab.

There is an implication, then, that the first organization to achieve the leap might end up with a system that exhibits either artificial general intelligence (AGI) or even artificial superintelligence (ASI). Given the nature of feedback effects, the first organization to achieve AGI or ASI could end up considerably ahead of the next closest competitor (assuming that entity is, in fact, able to survive and control its creation). The expectations of (i) an imminent leap, together with (ii) component synergies would be expected to force the top AI players into fierce competition for the best resources (researchers, platforms, and data).

The increasing availability of outsourced platforms and datasets then raises the question of whether a single entity must have joint control (ownership) of all three components to realize the synergies. If not, then it is quite likely that big leaps in algorithm performance could come from an individual entity (out of left field, from some guy in his basement) that is not part of the larger AI organizations. I think that this is one of the fears of the AI community.

To the extent that

  1. The big leaps are built upon the cutting edge technologies, and at the same time
  2. The cutting edge discoveries are kept confidential and not released to the larger AI community,

then it is less probable that this scenario will occur.

And there is definitely reason to believe that even the open source projects will not necessarily make all their discoveries public. For example, Lucy Bernholz, in “Artificial Intelligence and Nonprofits,” notes (emphasis mine):

In an interview posted on the Singularity University newsletter, one of the founding researchers, Andrej Karpathy, a Stanford doctoral candidate who interned at Google and DeepMind, says:

OpenAI... encourages us to publish, to engage the public and academia, to Tweet, to blog. .... If something like [CRISPR which has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society] happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.

As another example, Cade Metz, in “Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine,” notes

To be sure, Google isn’t giving away all its secrets. At the moment, the company is only open sourcing part of this AI engine. It’s sharing only some of the algorithms that run atop the engine. And it’s not sharing access to the remarkably advanced hardware infrastructure that drives this engine (that would certainly come with a price tag).

It then follows that the big breakthroughs will most likely come from the larger players in the AI race, i.e., those with joint control of all three sets of components.

Feedback Loops => Technology Adoption

Another aspect of the feedback effects associated with advances in AI technologies is the increasing rate of adoption of technologies embodying the improved AI. As AI improves, associated technologies “just work better,” so more people use them. As Dave Gershgorn puts it in “How Google Aims to Dominate Artificial Intelligence,”

…[I]ncreased adoption is because the feature just works better now.

And as more people use them, the technologies get better, and so more people use them, and so on.

As we saw earlier, Charles Clover noted the hugely important impact that even marginal improvements in technology can have on increasing the rate of adoption.

“A lot of people underestimate the difference between 95 per cent and 99 per cent accuracy. It’s not an ‘incremental’ improvement of 4 per cent; it’s the difference between using it occasionally versus using it all the time”…

Feedback Loops => First Mover Advantage

Perhaps one of the most important aspects of feedback effects is that they create a first mover advantage for the first entity that manages to achieve artificial general intelligence (AGI). This is yet another possible explanation for the recent frenzy to hire elite AI researchers. Kevin Kelly explains how this phenomenon works in “The Three Breakthroughs That Have Finally Unleashed AI on the World”:

The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.

Feedback Loops and/or Open Source => Network Effects

This subsection straddles both this section on Feedback Loops as well as the next section on Open Source Systems. Both feedback effects and open source systems exhibit network effects, both direct and indirect.

Network effects exist when the value of being a member of a network increases with the size of the network. Direct network effects are a form of network effects in which members of the network gain value from interacting or sharing resources with other members of the network. Indirect network effects are a form of network effects in which availability of complementary products and services creates value.  Since larger networks attract greater supplies of complementary products and services, they indirectly provide users with more value.

Since feedback effects and open sourcing both lead technologies to improve in quality, feedback effects and open sourcing thus lead more users both to adopt the systems and to create complementary products and services for the systems. Of course, both users and originators of the technologies benefit from having larger communities of users and greater availability of complementary products and services.
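
The direct/indirect distinction can be made concrete with a toy value model. The quadratic form for direct effects is the standard Metcalfe-style approximation (counting possible member-to-member links), and all of the constants are arbitrary assumptions:

```python
# Toy model of how network value scales with network size n.

def direct_value(n, value_per_link=0.01):
    # Direct effects: value from member-to-member interaction,
    # proportional to the n*(n-1) possible links (Metcalfe-style).
    return value_per_link * n * (n - 1)

def indirect_value(n, complements_per_user=0.05, value_per_complement=1.0):
    # Indirect effects: value from complements, assumed proportional to
    # network size via the supply of complementary products it attracts.
    return value_per_complement * complements_per_user * n

for n in (10, 100, 1000):
    print(f"n={n:>4}: direct={direct_value(n):>8,.0f}  indirect={indirect_value(n):>4,.0f}")
```

Direct value grows roughly with the square of network size while the indirect term here grows linearly, so large networks compound their lead on both margins.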

 

Stated Benefits of Open Source Systems

The founders of OpenAI, together with the media, have given several justifications for OpenAI being established as an open source organization, namely: (i) a non-profit organization can pursue information solely for the benefit of humanity without having to worry about generating a profit; (ii) to make sure no one entity has too much of an advantage in AI advancements over the rest of the community; (iii) to enable the open source community to benefit from and improve the technology; and (iv) to attract experienced researchers to the OpenAI team.

Focus on Projects that Benefit Humanity

Elon Musk and Sam Altman have emphasized that one of their primary reasons for establishing OpenAI as a non-profit organization is so that it won’t be beholden to shareholders. By being freed from the burden of having to generate a profit, OpenAI researchers are free to pursue any projects that benefit humanity, regardless of whether or not they will be able to generate financial value.

Mitigate Power of Single Entity

One of the stated reasons for making OpenAI open source is to make sure no one entity has too much power relative to everyone else. Elon Musk and Sam Altman state this directly in an interview. From “How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over”:

We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human.



… [W]e think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else.

Benefit From and Improve the Technology

Another stated reason for making OpenAI open source is that the open source community will not only benefit from having access to the technology, but will also improve it, perhaps even more quickly than the originating company could have done. Such (rapid) technology improvements will benefit the whole open source community generally, but they will also feed back to benefit the original creators of the open source technology, who are themselves users of that technology. In other words, the open source creators benefit from having a massive pool of non-company personnel freely contribute to improving the technology. The pace of technology improvement is especially important in an atmosphere that is both highly competitive and subject to feedback/network effects. Cade Metz reiterates this concept as it relates to Google open sourcing TensorFlow and Facebook open sourcing Torch and Big Sur in “Facebook Open Sources Its AI Hardware As It Races Google”:

But in the short term, Google is merely interested sharing the code. As Dean says, this will help the company improve this code. But at the same time, says Monga, it will also help improve machine learning as a whole, breeding all sorts of new ideas. And, well, these too will find their way back into Google. “Any advances in machine learning,” he says, “will be advances for us as well.”



It may seem odd that these companies [Google and Facebook] are giving away their technology. But they believe this will accelerate their work and foster new breakthroughs. If they open source their hardware and software tools, a larger community of companies and researchers can help improve them. “There is a network effect. The platform becomes better as more people use it,” says Yann LeCun, a founding father of deep learning, who now oversees AI work at Facebook. “The more people that rally to a particular platform or standard, the better it becomes—the more people contribute.”

Even after open sourcing a technology, however, the creators can still retain a competitive edge over the open source community by keeping private certain parts of the technology – i.e., the most novel, cutting edge, or otherwise strategically valuable. See the subsection above, “Feedback Loops => Component Synergies” for more detail on this point.

Attract Elite Researchers

Perhaps one of the most important reasons for making OpenAI open source, however, is so that the organization will be able to attract talent from the scarce pool of elite AI researchers. Many of the top researchers in AI originated in academia, and they particularly value the ability to freely publish and discuss their advances with the public. Altman states this explicitly. More from “How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over”:

You will be competing for the best scientists now who might go to Deep Mind or Facebook or Microsoft?

Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they’re working on, which at any of the industrial labs you don’t have to the same degree. We were able to attract such a high-quality initial team that other people now want to join just to work with that team. And then finally I think our mission and our vision and our structure really appeals to people.

Cade Metz indicates that Facebook has voiced this same rationale for making its Torch and Big Sur technologies open source.

Plus, Facebook can curry favor across the community, providing added leverage in recruiting and retaining talent. “Our commitment to open source is something that individuals who work here are passionate about,” says Serkan Piantino, an engineering director in Facebook’s AI group. “Having that be a part of our culture is a benefit when it comes to hiring.”



But according to LeCun, there are bigger reasons for open sourcing Big Sur and other hardware designs. For one thing, this can help reduce the cost of the machines. If more companies start using the designs, manufacturers can build the machines at a lower cost. And in a larger sense, if more companies use the designs to do more AI work, it helps accelerate the evolution of deep learning as a whole—including software as well as hardware.

One of the best explanations as to why people contribute to open source projects when they don’t receive any compensation for doing so was provided in a paper written by Yochai Benkler called “Coase’s Penguin, or, Linux and the Nature of the Firm.”

Yochai Benkler notes that from the researchers’ side, open source projects are a means for researchers to generate public reputation for themselves. Many researchers are not able to establish public reputations because they work on confidential projects. In these cases, the researchers may benefit from establishing a reputation publicly by showing off their talent out in the open.

Yochai Benkler also notes that many researchers contribute to open source projects as a means of learning from others and honing their skills.

 

Why Do I Think OpenAI Was Established As Open Source?

The More Obvious/Discussed Justifications

OpenAI as Open Source Is Consistent with Musk’s Stated Beliefs

Establishing OpenAI as an open source entity is consistent with Elon Musk’s participation in the Future of Life Institute and his previously stated concerns about AI being humanity’s “biggest existential threat” (see, for example, Miriam Kramer, “Elon Musk: Artificial Intelligence Is Humanity's 'Biggest Existential Threat'”). A non-profit organization would, indeed, enable Musk to pursue projects that benefit humanity (free and open to all) without having to worry about satisfying profit-seeking shareholders.

OpenAI as Open Source Is Consistent with Musk’s Public Image

Establishing OpenAI as an open source entity is consistent with Elon Musk’s public image as a visionary and a humanitarian. As Andrea Peterson puts it in “Even Elon Musk knows he’s a good supervillain candidate”:

Many of his interests seem aimed at setting himself up as some sort of technical messiah – a man who wants to develop electronic cars to help save the planet, a man who is willing to help move humanity to Mars if he fails, and a man who is already worried about Hal-like artificial intelligence even before that space odyssey.

Granted, many do believe that his public persona masks a darker underside. More from Andrea Peterson:

But some also worry there might be a dark side to his obsessions. What if his altruism is actually masking a supervillain in training?

Musk is, after all, a ludicrously wealthy entrepreneur in the vein of Superman nemesis Lex Luthor. And his interests have a curious overlap with a number of Bond villains…

I'm far from the first person to make the connection: More ambitious stories have been written, and a Google search for "Elon Musk supervillain" currently returns more than 10,000 results -- including a Web site specifically dedicated to the theory.

OpenAI as Open Source Facilitates Recruitment of Elite Researchers

Let’s consider the timeline of activities.

4-30-14: Precious Silva, “Facebook vs Google: Race to Build the Next Artificial Intelligence System”

According to a report by Top Tech News, Google, Facebook and similar large companies are looking for and hiring scientists related to artificial intelligence. The companies appear prepared to invest considerably on development of the technology.

"It's important to position yourself in this market for the next decade," said Yann LeCunn - LeCunn is a recognized New York University researcher overseeing the A.I. division of Facebook.

1-20-15: Facebook open sources Torch (AI Engine)

11-9-15: Google open sources TensorFlow (AI Engine)

12-10-15: Facebook open sources Big Sur (Platform)

12-11-15: OpenAI launches (AI Engine)

Given the timeline of activities, it seems reasonable to infer that OpenAI came to the table relatively late in the competition to attract top talent in AI research. If Musk and Altman wanted to hire elite researchers for their team, it is likely that they would have had to lure such talent away from other activities. As I indicated earlier, these researchers are particularly attracted to open source projects. Also, as the timeline indicates, Google and Facebook had already open sourced their AI engines. It seems reasonable to conclude, then, that making OpenAI open source was one of the best means for Musk and Altman to attract top researchers to join their team.

The Less Obvious/Discussed Justification

There is another justification for Elon Musk making OpenAI an open source organization that has not been so widely discussed: Elon Musk will personally benefit from designating OpenAI as open source.

Consider Elon Musk's other ventures. According to Wikipedia, SpaceX is

…[A]n American aerospace manufacturer and space transport services company … founded in 2002 … with the goal of creating the technologies to reduce space transportation costs and enable the colonization of Mars.

There are clear applications of AI technology to space travel. For example, Steve Chien and Robert Morris report in "Space Applications of Artificial Intelligence":

As many space agencies around the world design and deploy missions, it is apparent that there is a need for intelligent, exploring systems that can make decisions on their own in remote, potentially hostile environments. At the same time, the monetary cost of operating missions, combined with the growing complexity of the instruments and vehicles being deployed, make it apparent that substantial improvements can be made by the judicious use of automation in mission operations.

**********

The case for increasing the level of autonomy and automation for space exploration is well known. Stringent communications constraints are present, including limited communication windows, long communication latencies, and limited bandwidth. Additionally, limited access and availability of operators, limited crew availability, system complexity, and many other factors often preclude direct human oversight of many functions. In fact, it can be said that almost all spacecraft require some level of autonomy, if only as a backup when communications with humans are not available or fail for some reason.

Increasing the levels of autonomy and automation using techniques from artificial intelligence allows for a wider variety of space missions and also frees humans to focus on tasks for which they are better suited. In some cases autonomy and automation are critical to the success of the mission. For example, deep space exploration may require more autonomy in the spacecraft, as communication with ground operators is sufficiently infrequent to preclude continuous human monitoring for potentially hazardous situations.
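The communication-latency constraint Chien and Morris cite can be made concrete with a quick back-of-the-envelope calculation (the distances below are approximate public figures, not taken from their paper): even at the speed of light, a signal between Earth and Mars takes minutes each way, so a spacecraft cannot wait for a human operator's response to a fast-developing hazard.

```python
# One-way light travel time between Earth and Mars, illustrating why
# deep-space missions require onboard autonomy rather than remote control.

C = 299_792_458  # speed of light, m/s

def one_way_delay_minutes(distance_m: float) -> float:
    """One-way signal delay in minutes over the given distance."""
    return distance_m / C / 60

# Approximate Earth-Mars distances in meters
closest = 54.6e9    # near closest approach
farthest = 401e9    # near solar conjunction

print(f"Closest approach:  ~{one_way_delay_minutes(closest):.1f} min one-way")
print(f"Near conjunction:  ~{one_way_delay_minutes(farthest):.1f} min one-way")
```

A round-trip exchange thus ranges from roughly six minutes to well over forty, which is far too slow for continuous human monitoring of hazardous situations.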

Similarly, according to Wikipedia, Tesla Motors "… [D]esigns, manufactures, and sells luxury electric cars, electric vehicle powertrain components, and battery products… Future vehicles may further advance autonomous driving features."

Elon Musk has publicly announced plans to incorporate autonomous driving into his Tesla automobiles. From M4tt, "Elon Musk in talks with Google to bring driverless tech to Tesla cars (update)":

Tesla CEO Elon Musk has revealed he has been in talks with Google to bring driverless technology to its vehicles. According to Bloomberg, Musk sees autonomous driving becoming the next logical step in the evolution of cars, but believes Google's technology — which currently utilizes sensors over an optical system — is "too expensive," adding that Tesla is "not focused on autopilot right now [but] we will be in the future."

Admitting that Tesla has engaged in technical discussions with Google, Musk also told Bloomberg that Tesla will likely develop its own autopilot system, which could incorporate a more cost-effective camera-based alternative that uses software to detect and position a vehicle.

And according to Wikipedia, SolarCity is "… [A]n American provider of energy services... Among its primary services, the company designs, finances, and installs solar power systems… SolarCity has grown to meet the rapidly growing installation of solar photovoltaic systems in the United States. … SolarCity diversified in 2014 and 2015, with the aim of lowering costs and boosting sales."

There are many applications of AI technology to solar power systems. For example, Soteris A. Kalogirou and Arzu Şencan, in "Artificial Intelligence Techniques in Solar Energy Applications," indicate:

Artificial intelligence techniques have been used by various researchers in solar energy applications. This section deals with an overview of these applications. Some examples on the use of AI techniques in the solar energy applications are summarized in Table 1.
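As a purely illustrative sketch of the kind of technique Kalogirou and Şencan survey (this example and its synthetic data are my own, not taken from their paper), consider fitting a simple regression model that predicts a panel's power output from irradiance and ambient temperature. Real solar applications typically use neural networks trained on measured data; least squares on a toy dataset conveys the same idea of learning the output relationship from data.

```python
# Toy sketch: learn panel output as a function of irradiance and
# temperature from (synthetic) data via least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200
irradiance = rng.uniform(200, 1000, n)   # W/m^2
temperature = rng.uniform(10, 40, n)     # deg C

# Hypothetical ground truth: output rises with irradiance,
# falls slightly as the panel heats up, plus measurement noise.
output = 0.18 * irradiance - 0.5 * temperature + rng.normal(0, 5, n)

# Fit output ~ a*irradiance + b*temperature + c
X = np.column_stack([irradiance, temperature, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)

print("fitted coefficients (a, b, c):", coef.round(2))
```

The fitted coefficients recover the assumed relationship closely; in practice the model would then be used to forecast output or flag underperforming installations.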

Finally, according to Wikipedia, OpenAI is "… [A] non-profit artificial intelligence (AI) research company … that aims to carefully promote and develop open-source friendly AI in such a way as to benefit, rather than harm, humanity as a whole."

I’ve argued that making OpenAI open source facilitated Elon Musk’s ability to lure top AI researchers away from other activities and join the team of researchers at OpenAI.

I’ve argued that feedback loops serve to amplify the benefits from (i) component synergies, (ii) technology adoption, (iii) first-mover advantage, and (iv) network effects, all of which will be enhanced by having the best researchers on the OpenAI team.

There are clear applications of advances in AI technologies generated by OpenAI to Elon Musk’s other enterprises, SpaceX, Tesla Motors, and SolarCity.

Elon Musk’s role in OpenAI will give him the authority to influence which AI technology advances are released to the public. He will clearly have an incentive to keep private any technology advances that would particularly serve his interests in SpaceX, Tesla Motors, and/or SolarCity.

Regardless of whether or not specific advances are made public, as Co-Chairman of OpenAI, Elon Musk will have clear and unfettered direct access to all research. And because all research will be generated under the auspices of open source, he will be able to take any new discoveries and freely use them in his other, private enterprises – SpaceX, Tesla Motors, and SolarCity – without having to worry about intellectual property (IP) infringement issues.
