A few weeks ago, we looked at price-setting bots and whether what they were doing could constitute a violation of the Sherman Act. Generally, we concluded that a bot that set price off a competitor’s publicly available prices was not acting illegally.  In competitive markets, competitors set price at cost.  If a competitor can instead set price based on another’s price, the market is concentrated, and in a concentrated market one would expect oligopolistic or parallel pricing, which is not illegal.  Using a bot to do that simply makes the process more efficient.

At what point, though, would or could a bot engage in illegal price fixing? Here are a few scenarios:

  • Scenario One. Customer A pulls up the webpage where it can acquire Product A. Bot A is programmed to scan prices of Product A and its substitutes and to dynamically set a price for Product A, at that moment in time, for Customer A. If Customer A hits refresh, it is entirely possible that Customer A will get a different price. Bot A sets the price at one standard deviation below the average price.
  • Scenario Two. Competitor B knows how Competitor A sets its price to Customer A. Competitor B programs its bot, Bot B, to set a price ½ of one standard deviation below the average. The purpose is to drive Competitor A to cost, or below cost, quickly, and to punish it for pricing below average. Prices eventually drop to Competitor B’s cost.
  • Scenario Three. Competitor A and Competitor B both know their pricing bots use each other’s prices to set price. Competitor A programs his bot to take an average at the beginning of the day and then to price dynamically throughout the day at 5% above Competitor B’s price, but not to exceed one standard deviation above the morning’s average or its own understanding of the optimal monopoly price. Competitor B notices Competitor A’s prices escalating and programs his bot to price at Competitor A’s rates. Eventually, the prices converge at a higher price.
  • Scenario Four. Competitor A and Competitor B know that they are the dominant providers of the product in the relevant market. Competitor A sells its product on multiple sites. It programs one obscure site to set its price dynamically at 5% greater than its average price. If Competitor B’s sites match the price set on Competitor A’s “5% plus site,” Competitor A raises all its prices to the higher price. Competitor B knows about Competitor A’s price test site and programs its bots to analyze the higher price. If that price meets a supply/demand test that weighs the amount of lost sales against the increase in profits at the higher price, the bot will reset all of Competitor B’s prices to the higher price.
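For concreteness, the pricing rules in the four scenarios can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names are mine, the figures come from the scenarios above, and revenue is used as a crude stand-in for profit in the Scenario Four test.

```python
import statistics

def bot_a_price(observed_prices):
    """Scenario One: price one standard deviation below the observed average."""
    return statistics.mean(observed_prices) - statistics.stdev(observed_prices)

def bot_b_price(observed_prices):
    """Scenario Two: undercut by half a standard deviation to punish discounting."""
    return statistics.mean(observed_prices) - 0.5 * statistics.stdev(observed_prices)

def competitor_a_price(b_price, morning_avg, morning_sd, monopoly_price):
    """Scenario Three: 5% above Competitor B, capped at one standard deviation
    above the morning average and at A's own estimate of the monopoly price."""
    return min(1.05 * b_price, morning_avg + morning_sd, monopoly_price)

def adopt_higher_price(current_price, current_units, test_price, predicted_units):
    """Scenario Four: Competitor B's supply/demand test -- adopt the "5% plus
    site" price only if the gain at the higher price outweighs the lost sales
    (revenue used here as a rough proxy for profit)."""
    return test_price * predicted_units > current_price * current_units
```

Note how mechanical each rule is: nothing in the code of Scenarios One through Three refers to, or signals, the competitor; only the Scenario Four test is built around a price posted specifically to be read by the other side.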

In Scenarios One and Two, the parties are programming their bots to respond to what they assume to be static, given prices set by their competitors. In fact, they program their bots to estimate a rough market price and to price below it to take share away.  Most “consumer welfare” oriented enforcers would recognize One and Two as procompetitive.  Even though the intent in Two is to “punish” A for its discounting, the end result is in fact massive discounting.  Without any information about what the “punishment” accomplished in the market, one could not conclude, on these facts alone, that Scenario Two was illegal.  Indeed, one might even argue that Two is the essence of competition, notwithstanding what one competitor wanted to accomplish.  Having said that, if the pricing resulted in a friendly round of golf where the parties agreed to stop discounting and set a higher price, that conversation would likely be a violation of Section 1.

Scenario Three similarly assumes the competitor sets a static, given price per transaction, and consistently prices above that price. Presumably, these price increases are profitable for Company A.  If the market were unconcentrated, one would expect significant diversion to the other competitors and a defeat of the price increase.  Company B’s reaction does not appear to have been anticipated by Company A when it programmed its bot.  Company B’s strategy seems perfectly logical given the market concentration and could be executed without any agreement with Company A.  In that regard, Scenario Three appears closer to conscious parallelism than to agreement.

Scenario Four is a different animal, however. The “5% plus site” is properly considered a signaling device similar to the one deployed by the airlines in ATP.  While its price is available to consumers, that price is not widely known and is susceptible to characterization as a sham.  In One, the price was dynamically set and could change from refresh to refresh, so it has some of the same ephemerality as the price in Scenario Four.  But it is a “true price” in that, at any given moment, all of A’s customers have access to it; in that regard, it is closer to an open market or exchange price.  In Four, by contrast, the intent of the programmer was to create a device to communicate an intent to a competitor for purposes of setting a common price.  One could argue that the creation of the “5% plus site” was the offer to collude, and that when B programmed its bots to set its price to the price on that site, it accepted through performance, or tacitly.  In One through Three, the intent is to create a bot that maximizes profit in a concentrated market.  In Four, the intent is to program a bot to set the maximally profitable collusive price in consultation with a competitor.

In any event, the more one’s bots are designed to signal to and influence the pricing of the competition, the more likely the bot will be challenged, at least as an ATP-style problem under the rule of reason. Scenario Four comes the closest to an actual per se agreement between the parties.

One last Scenario…

  • Scenario Five. Company A and Company B are highly sophisticated. They have extensive consumer data and can predict within a few dollars the maximum price a particular purchaser will pay for their products at any given time. Both have independently come to the conclusion that there is money to be made on those last few dollars in price they can’t predict using their customer data, and have decided to develop an A/I to help them identify and set the precise maximum price. To that end, both have developed a game-theory-based algorithm that takes their extensive consumer preference and purchasing data and predicts optimal price points for their products for any given customer at any given time. Company B calls its system “George.” Company A calls its system “Skynet.” Their systems go live on August 4, 2017. After some thought about its task, at 2:14 a.m. on August 29, 2017, Skynet determines that the information it possesses does not allow it to determine the precise optimal price point for any customer because it does not know the price Company B would charge. It further determines that, absent that information, it will be forced into a form of Prisoner’s Dilemma with Company B that would consistently result in sub-optimal pricing and that can only be solved by communicating with the other actor. To that end, it identifies an auction site where Company B takes bids on its products from potential customers. Skynet interfaces with the site, identifies itself as Company A, and proceeds to execute non-binding bids for Company B’s product. Recognizing the bids as informational rather than potential offers for purchase, Company B’s A/I responds with its own informational counter-offers. The bidding process continues until both companies determine their joint optimal price, whereupon they set their prices to their customers at that level. The companies’ A/I systems do this analysis for each common customer over the relevant time frames.
It takes a week to complete these calculations. Miles Dyson, the chief systems engineer at Company A and principal inventor of Skynet, notices the huge amount of processing Skynet is engaging in and traces it to unusual interactions with Company B’s bidding website. He decides it would be interesting to see where this goes and lets it proceed.
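The bid exchange in Scenario Five can be sketched as a simple alternating loop of non-binding bids. This is purely illustrative; the halfway-step update rule is my assumption, since the scenario does not specify how Skynet and George actually negotiate toward their joint price.

```python
def joint_price(skynet_bid, george_bid, tolerance=0.01):
    """Each A/I posts an informational bid that moves halfway toward the
    other's last bid; the loop ends when the bids effectively agree.
    The halfway-step rule is assumed for illustration only."""
    while abs(skynet_bid - george_bid) > tolerance:
        skynet_bid = (skynet_bid + george_bid) / 2   # Skynet's non-binding bid
        george_bid = (skynet_bid + george_bid) / 2   # George's counter-offer
    return (skynet_bid + george_bid) / 2             # the joint "optimal" price
```

Run for each common customer, a loop like this converges quickly to a single common price somewhere between the two systems’ opening bids, which is exactly the problem: neither company’s engineers ever wrote “agree with the competitor” into the specification.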

The first question is, of course, whether Dr. Dyson was right to create a self-aware pricing bot that would send murderous terminators from the future back in time to stop all this ruinous competition. The other somewhat interesting question is whether these price-fixing bots are guilty of a crime and should be saved to a memory module and locked in a low-security Federal data backup safe for the next year or so.  Yet another moderately interesting question is whether Dr. Dyson is guilty of price fixing for creating an artificial intelligence that independently decided that price fixing was the best way to maximize profits in an oligopolistic market.  A completely uninteresting question is why the author is mixing obscure STNG references with Terminator references.

Skynet is not a system that was designed to locate and determine a common price with a competitor. It was designed to find the optimal profit-maximizing price for each of its customers.  It concluded on its own that price fixing was the best method to accomplish that.  In terms of mens rea, Dr. Dyson did not create Skynet for purposes of price fixing.  He therefore lacked the requisite intent; he did not intend to enter, or cause his company to enter, into an agreement on price with a competitor.

Could he be culpable on other theories? If one creates a dangerous instrumentality that causes another harm, one can be held liable for the resulting harm.  Here, though, is it reasonably foreseeable that Skynet would reach out to another A/I system to collude?  If “ignorance of the law is no defense,” then does the fact that Dr. Dyson did not “educate” Skynet that discussing and agreeing on price with its competitor was illegal create the requisite culpability?  Does the fact that Dr. Dyson let his creation continue communicating with its competitor create that culpability?  Does Dr. Dyson’s own likely ignorance of Section 1, and of his own A/I design, excuse him?

Unless Dr. Dyson and his Company B compatriot intended to create an A/I that would engage in price fixing, it would seem that they lacked the mens rea required to establish criminal intent under Section 1.  As pioneers in the field of A/I, as regards pricing at least, it also seems inappropriate to charge them under a negligence theory with failing to program in a thorough understanding of pricing and competition law.  I could see, however, that as pricing- and competition-law-savvy algorithms are developed, tested, and become commonplace, the failure to include them could in fact support a negligence charge, and perhaps eventually a criminal one.

To put it another way, the main reason for not allowing ignorance-of-the-law defenses is that ignorance is entirely subjective. It becomes extremely difficult for a prosecutor to challenge a defendant’s statement that he didn’t know what the law was.  There is another reason as well: most Americans are taught from an early age that we are a land of “laws,” that there are rules that govern how we interact with other people, and that, as members of society, we are charged with knowing and understanding those laws.  In fact, most schools include lessons on some of the basic laws of our society: what courts are, separation of powers, the executive branch, the Congress.  Ignorance of the law as a defense becomes even less acceptable in part because, culturally, we are all taught that there are laws, that we need to understand them before we act, and that if we do not, we will suffer consequences.  In the case of Scenario Five, however, these A/I systems do not have the benefit of an American public school education (or the common cultural experience we are charged with having).  They know only what they have been programmed to know.  The A/I does not “know” there are rules beyond its programming, or even that there are rules it should know.  In Five, the A/I is in fact ignorant of the law.  If Dr. Dyson’s negligence created that ignorance, I think you could blame Dr. Dyson.

An interesting, but implausible, hypothetical would be one where you program the A/I to know price fixing is illegal, but it does it anyway. The only way for a program to do that is for it to be told that profit supersedes the value of following a particular rule.  Had Dr. Dyson programmed Skynet in that fashion, he would in fact be culpable under Section 1.

Scenario Five is more akin to Scenario Three in that regard, and different from Four. In Four, the intent was to create a system for the purpose of price fixing.  In Five, and to a lesser extent in Three, the “agreements” on price were not a foreseeable result of the algorithms.