Cloud computing isn’t going anywhere, but some companies are shifting their machine learning data and models to their own machines they manage in-house. Adopters are spending less money and getting better performance.
In the end, the transition could be a sign of sophistication among businesses that have moved beyond merely dipping their toes in AI.
A quick-service restaurant chain is running its AI models on machines inside its stores to localize delivery logistics. At the same time, a global pharma company is training its machine learning models on premises, using servers it manages itself.
Cloud computing isn’t going anywhere, but some companies that use machine learning models and the tech vendors supplying the platforms to manage them say machine learning is having an on-premises moment. For many years, cloud providers have argued that the computing requirements for machine learning would be far too expensive and cumbersome to start up on their own, but the field is maturing.
“We still have a ton of customers who want to go on a cloud migration, but we’re definitely now seeing — at least in the past year or so — a lot more customers who want to repatriate workloads back onto on-premise because of cost,” said Thomas Robinson, vice president of strategic partnerships and corporate development at MLOps platform company Domino Data Lab. Cost is a big driver, Robinson said, noting the hefty price of running computationally intensive deep-learning models on cloud servers; businesses today use such models, including GPT-3 and other large language transformer models, in their conversational AI tools and chatbots.
The on-prem trend is growing among big box and grocery retailers that need to feed product, distribution and store-specific data into large machine learning models for inventory predictions, said Vijay Raghavendra, chief technology officer at SymphonyAI, which works with grocery chain Albertsons. Raghavendra left Walmart in 2020 after seven years with the company in senior engineering and merchant technology roles.
“This happened after my time at Walmart. They went from having everything on-prem, to everything in the cloud when I was there. And now I think there’s more of an equilibrium where they are now investing again in their hybrid infrastructure — on-prem infrastructure combined with the cloud,” Raghavendra told Protocol. “If you have the capability, it may make sense to stand up your own [co-location data center] and run those workloads in your own colo, because the costs of running it in the cloud does get quite expensive at certain scale.”
Some companies are considering on-prem setups in the model building phase, when ML and deep-learning models are trained before they are released to operate in the wild. That process requires compute-heavy tuning and testing of large numbers of parameters or combinations of different model types and inputs using terabytes or petabytes of data.
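That tuning stage is, at its core, a search over combinations, each of which is a full training run. Here is a minimal sketch of the pattern in Python; the parameter names, ranges and scoring function are hypothetical stand-ins, not details from any company in this story:

```python
import itertools

# Hypothetical search space; real sweeps cover far larger grids of
# architectures, optimizers and data variants.
search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64, 128],
    "model_type": ["transformer", "cnn"],
}

def train_and_score(config):
    # Toy stand-in: a real call here trains a model on terabytes of data
    # and can consume hours of GPU time per configuration.
    return -abs(config["learning_rate"] - 3e-4) - config["batch_size"] / 1e4

best_config, best_score = None, float("-inf")
for values in itertools.product(*search_space.values()):
    config = dict(zip(search_space, values))
    score = train_and_score(config)  # the expensive part
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```

Even this toy grid produces 18 full training runs; real sweeps multiply that by orders of magnitude, which is where the compute bill comes from.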
“The high cost of training is giving people some challenges,” said Danny Lange, vice president of AI and machine learning at gaming and automotive AI company Unity Technologies. The cost of training can run into millions of dollars, Lange said.
“It’s a cost that a lot of companies are now looking at saying, can I bring my training in-house so that I have more control on the cost of training, because if you let engineers train on a bank of GPUs in a public cloud service, it can get very expensive, very quickly.”
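How quickly “very expensive” arrives is easy to see with back-of-envelope arithmetic. All of the figures below are illustrative assumptions, not prices quoted by Lange or any cloud provider:

```python
# Hypothetical month-long training job on rented cloud GPUs.
gpus = 512                 # GPUs reserved for the job (assumed)
rate_per_gpu_hour = 3.00   # dollars per GPU-hour (assumed on-demand price)
hours = 24 * 30            # one month of continuous training

print(f"${gpus * rate_per_gpu_hour * hours:,.0f}")  # -> $1,105,920
```

A single sustained run on those assumed terms passes the million-dollar mark, before counting the repeated runs that tuning requires.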
Companies shifting compute and data to their own physical servers located inside owned or leased co-located data centers tend to be on the cutting edge of AI or deep-learning use, Robinson said. “[They] are now saying, ‘Maybe I need to have a strategy where I can burst to the cloud for appropriate stuff. I can do, maybe, some initial research, but I can also attach an on-prem workload.’”
One pharmaceutical customer Domino Data Lab works with has purchased two Nvidia server clusters to manage compute-heavy image recognition models on-prem, even though the company has publicized its cloud-centric strategy, Robinson said.
For some companies, a preference for running their own hardware is not just about training massive deep-learning models. Victor Thu, president at Datatron, said retailers or fast-food chains with area-specific machine learning models — used to localize delivery logistics or optimize store inventory — would rather run ML inference workloads in their own servers inside their stores, rather than passing data back and forth to run the models in the cloud.
Some customers “don’t want it in the cloud at all,” Thu told Protocol. “Retail behavior in San Francisco can be very different from Los Angeles and San Diego for example,” he said, noting that Datatron has witnessed customers moving some ML operations to their own machines, especially those retailers with poor internet connectivity in certain locations.
Model latency is a more commonly recognized reason to shift away from the cloud. Once a model is deployed, the amount of time it takes for it to pass data back and forth between cloud servers is a common factor in deciding to go in-house. Some companies also avoid the cloud to make sure models respond rapidly to fresh data when operating in a mobile device or inside a semi-autonomous vehicle.
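The trade-off is easy to demonstrate: an in-memory model answers in microseconds, while a cloud call pays a network round trip every time. A minimal sketch, in which both the local model and the remote endpoint are hypothetical placeholders rather than anything from the companies in this story:

```python
import time
import urllib.request

def timed_ms(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000

def local_predict(features):
    # Stand-in for an on-prem model already loaded in memory.
    return sum(features) > 0.5

def cloud_predict(features):
    # Stand-in for a hosted inference endpoint; example.com is a
    # placeholder URL, not a real model API.
    urllib.request.urlopen("https://example.com", timeout=5).read()

features = [0.1, 0.2, 0.3]
print(f"local: {timed_ms(local_predict, features):.3f} ms")
print(f"cloud: {timed_ms(cloud_predict, features):.1f} ms")
```

For a model serving split-second decisions, tens of milliseconds of network time per call can dominate the inference itself.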
“Often the decision to operationalize a model on-prem or in the cloud has largely been a question of latency and security dictated by where the data is being generated or where the model results are being consumed,” Robinson said.
Over the years, cloud providers have overcome early perceptions that their services were not secure enough for some customers, particularly those from highly regulated industries. As big-name companies such as Capital One have embraced the cloud, data security concerns have less currency nowadays.
Still, data privacy and security does compel some companies to use on-prem systems. AiCure uses a hybrid approach in managing data and machine learning models for its app used by patients in clinical trials, said the company’s CEO Ed Ikeguchi. AiCure keeps processes involving sensitive, personally identifiable information (PII) under its own control.
“We do much of our PII-type work locally,” Ikeguchi said. However, he said, when the company can use aggregated and anonymized data, “then all of the abstracted data will work with cloud.”
Ikeguchi added, “Some of these cloud providers do have excellent infrastructure to support private data. That said, we do take a lot of precautions on our end as well, in terms of what ends up in the cloud.”
“We have customers that are very security conscious,” said Biren Fondekar, vice president of customer experience and digital strategy at NetApp, whose customers from highly regulated financial services and health care industries run NetApp’s AI software in their own private data centers.
Even cloud giants are responding to the trend by subtly pushing their on-prem products for machine learning. AWS promoted its Outposts infrastructure for machine learning last year in a blog post, citing decreased latency and high data volume as two key reasons customers want to run ML outside the cloud.
“One of the challenges customers are facing with performing inference in the cloud is the lack of real-time inference and/or security requirements preventing user data to be sent or stored in the cloud,” wrote Josh Coen, AWS senior solutions architect, and Mani Khanuja, artificial intelligence and machine learning specialist at AWS.
In October, Google Cloud announced Google Distributed Cloud Edge to accommodate customer concerns about region-specific compliance, data sovereignty, low latency and local data processing.
Microsoft Azure has introduced products including its Azure Arc services to help customers take a hybrid approach to managing machine learning by running ML models in data centers or at the edge, and validating and debugging models on local machines, then deploying them in the cloud.
Snowflake, which is integrated with Domino Data Lab’s MLOps platform, is mulling more on-prem tools for customers, said Harsha Kapre, senior product manager at Snowflake. “I know we’re thinking about it actively,” he told Protocol. Snowflake said in July that it would offer its external table data lake architecture — which can be used for machine learning data preparation — for use by customers on their own hardware.
“I think in the early days, your data had to be in Snowflake. Now, if you start to look at it, your data doesn’t actually have to be technically [in Snowflake],” Kapre said. “I think it’s probably a little early” to say more, he added.
As companies integrate AI across their businesses, more and more people in an enterprise are using machine learning models, which can run up costs if they do it in the cloud, said Robinson. “Some of these models are now used by applications with so many users that the compute required skyrockets and it now becomes an economic necessity to run them on-prem,” he said.
But some say the on-prem promise has hidden costs.
“The cloud providers are really, really good at purchasing equipment and running it economically, so you are competing with people who really know how to run efficiently. If you want to bring your training in-house, it requires a lot of additional cost and expertise to do,” Lange said.
Bob Friday, chief AI officer at communications and AI network company Juniper Networks, agreed.
“It’s almost always cheaper to leave it at Google, AWS or Microsoft if you can,” Friday said, adding that if a company doesn’t have an edge use-case requiring split-second decision-making in a semi-autonomous vehicle, or handling large streaming video files, on-prem doesn’t make sense.
But cost savings are there for enterprises with large AI initiatives, Robinson said. While companies with smaller AI operations may not realize cost benefits by going in-house, “at scale, cloud infrastructure, particularly for GPUs and other AI-optimized hardware, is much more expensive,” he said, alluding to Domino Data Lab’s pharmaceutical client that invested in Nvidia clusters “because the cost and availability of GPUs was not palatable on AWS alone.”
Robinson added, “Another thing to take into consideration is that AI-accelerated hardware is evolving very rapidly and cloud vendors have been slow in making it available to users.”
In the end, like the shift toward multiple clouds and hybrid cloud strategies, the machine learning transition to incorporate on-prem infrastructure could be a sign of sophistication among businesses that have moved beyond merely dipping their toes in AI.
“There’s always been a bit of a pendulum effect going on,” Lange said. “Everybody goes to the cloud, then they sort of try to move back a bit. I think it’s about finding the right balance.”
Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of “Campaign ’08: A Turning Point for Digital Media,” a book about how the 2008 presidential campaigns used digital media and data.
Microsoft’s biggest challenge with Call of Duty has nothing to do with Sony. It’s about modernizing the franchise for a cross-platform and subscription future.
Microsoft’s potential ownership of the series presents a particularly thorny set of problems for Sony.
Microsoft and Sony have been waging an increasingly bitter battle over Call of Duty. Over the past two weeks, the feud has spilled out into the public through regulatory filings in countries like Brazil and New Zealand, which, unlike the U.S., publish such documents for all to see.
Microsoft’s goal is to convince regulators worldwide that its landmark acquisition of Call of Duty parent Activision Blizzard for close to $70 billion should get the greenlight. Sony’s goal, on the other hand, is to raise the alarm about its primary gaming rival owning one of its biggest cash cows, and whether the PlayStation playbook of platform exclusivity might be turned against Sony if Microsoft decides to make Call of Duty exclusive in some way to Xbox or its Game Pass subscription service.
Microsoft’s potential ownership of the series presents a particularly thorny set of problems for Sony, which has for the last decade been the primary platform on which players have purchased and played Call of Duty games. As of this year, Sony has sold 117 million PS4s, making it one of the most successful consoles and by extension most successful Call of Duty distribution platforms ever. Sony has said in its regulatory filings that Call of Duty is an unrivaled and even “essential” video game “so popular that it influences users’ choice of console.”
But the debate over Call of Duty is about much more than whether the game will be an Xbox exclusive. It’s about whether Call of Duty as we know it — as one of the oldest, most resilient and also conservative blockbuster game franchises — will change with the times.
“The notion of a console war or rivalry for Microsoft and Sony as a polarized dichotomy is outdated,” said Joost van Dreunen, a former game analyst and professor at New York University who studies the game market. Instead, it’s all about pointing the series in the right direction for the future. “As gaming gets mainstream, these classic franchises like Call of Duty hit these midlife crises and try to figure out how to reinvent themselves and stay fresh and have pull with audiences,” he said.
Could Microsoft help Call of Duty meet the current moment of the game industry as it undergoes major changes to its primary business models and digital distribution strategies? If Microsoft succeeds, it wouldn’t matter which platform you use to play Call of Duty, because Call of Duty would be available everywhere: on phones, web browsers and, yes, even still on PlayStation.
Call of Duty has become, for better and worse, the epitome of the hardcore military gun game. It started on its modern trajectory with the aptly titled launch of 2007’s Modern Warfare, which marked a gradual drift away from gritty wartime realism and began fusing the franchise with blockbuster action movie bombast. As the bestselling shooter of all time, Call of Duty has become the benchmark against which the industry’s old guard judges success — culturally and financially. That’s also made it a coveted bargaining chip in the console market.
In the U.S., where console gaming reigns supreme, “through the end of 2021, Call of Duty has been the bestselling video game franchise in premium game sales … for 13 consecutive years,” NPD game director Mat Piscatella told Protocol. Every fall, a new entry of Call of Duty is released like clockwork, helping boost holiday console sales.
“It’s one of the kids that sits in between this custody battle between Xbox and Sony,” van Dreunen said. “Call of Duty is one of the key franchises that is closely associated with the console market, and for a lot of observers it serves as a proxy for the overall health of the ecosystem.”
While Call of Duty has never before been exclusive to one console platform, Microsoft and Sony have for many years engaged in a bidding war with Activision to secure coveted marketing deals and other exclusives, from the right to show the PlayStation or Xbox logo during a televised trailer to specific in-game benefits like early access to beta releases. “There’s a history of Microsoft and Sony fencing with ownership and exclusivity rights around the Call of Duty franchise,” van Dreunen said.
In many ways, the debate around Call of Duty exclusivity, and the franchise’s power to drive console choice and retail sales, is rooted in a perception of the game industry that’s fast becoming an anachronism. The assumption is that Microsoft and Sony are primarily motivated by rewarding their fans for their loyalty while also harming rivals, and by extension players of those competing ecosystems.
But that clashes with the direction of the modern game industry and the strategic vision Microsoft has laid out for the future of the Xbox business. The last few years have resulted in the erosion of major barriers to cross-platform play, cloud streaming and subscription bundling. Many gamers today enjoy premium, console-quality games on a variety of devices, including smartphones, and the most popular games — like Fortnite, Roblox and Genshin Impact — tend to be free-to-play live service titles that are updated regularly over the course of many years. Game Pass, because it lets consumers take their games across platforms and even stream them from the cloud, is central to Microsoft’s success in such a world.
Call of Duty, on the other hand, exists with one foot firmly planted in the past and another attempting to drag the franchise forward. The game is still released annually as a premium product costing between $60 and $70. When one entry flops, as last year’s Vanguard did (for a variety of reasons), it sinks the whole business as sales plunge and players flock to other games in droves. Activision Blizzard’s abysmal second-quarter earnings announcement earlier this month showed a $271 million year-over-year decline in profit and 33 million fewer monthly active players compared to last year. Company executives point to Vanguard’s lackluster performance as the culprit.
Activision has been working to diversify the Call of Duty business so that it’s not so dependent on the success of each year’s fall release. Call of Duty Mobile, a standalone smartphone version of the game developed by Tencent’s TiMi Studio Group, has earned more than $1.5 billion as of February of this year. In May, the game clocked more than 650 million downloads.
The free-to-play Call of Duty: Warzone battle royale, released in 2020, has been a hit as well, amassing more than 125 million players and helping the franchise as a whole surpass $30 billion in lifetime revenue through a microtransaction-heavy business model inspired by Fortnite. But Activision is still reliant each year on releasing a new entry in the franchise, and the success of Warzone is intrinsically tied to each new mainline Call of Duty entry because the games have a unified pool of cosmetics, unlockable guns and a shared look and feel.
When Vanguard flopped, Warzone suffered, too, as players felt less compelled to spend money on in-game cosmetics. The problem grew so dire — compounded by ballooning file sizes and bloat from juggling multiple games’ worth of content — that Activision announced a full reboot of Warzone to launch alongside this fall’s new Call of Duty entry.
This blending of business models may have seemed forward-thinking a few years ago, but it’s quickly become a double-edged sword for Call of Duty as Activision struggles to innovate without disrupting its primary revenue source and alienating fans. “[Activision] haven’t really done much with it. They’ve been very conservative in terms of adapting to new market circumstances,” van Dreunen said. “I think long term that’s going to erode their financial capabilities. They’re going to have to do something about a service model that is more focused on where the market is headed. [Call of Duty] needs an overhaul.”
Call of Duty: Modern Warfare II was promoted with the biggest entertainment advertisement ever at the Port of Los Angeles in May 2022. Photo: Jerod Harris/Getty Images for Activision
The closest analog in the console gaming space is Electronic Arts’ FIFA, which also gets a new entry year after year. Yet unlike Activision, which struggles to keep Call of Duty fresh by changing up each game’s setting and creating convoluted narratives that span multiple entries, EA releases a largely identical version of its soccer game with updated rosters, while a live service component called Ultimate Team makes the bulk of the money and persists from one entry to the next.
“FIFA is the closest comparison,” van Dreunen said. “It’s not doubling its user base year after year, but it manages to monetize more aggressively on the back end. The question for Call of Duty is: How do you transition away from [boxed] products and toward live service, and what is the Ultimate Team answer for Call of Duty?”
Microsoft has gone to great lengths to dispel the notion that it might do something anti-consumer with Call of Duty. The company said in January it would honor its existing Call of Duty agreements with Sony if the deal went through, and that includes publishing the next three major entries for PlayStation. Microsoft president Brad Smith said a month later that Microsoft had “committed to Sony that we will also make them available on PlayStation beyond the existing agreement and into the future.”
In a filing with Brazil’s regulatory body published this past week, Microsoft flat out said Call of Duty exclusivity would harm its business. “The reality is that the strategy of retaining Activision Blizzard’s games by not distributing them in rival console shops would simply not be profitable for Microsoft,” the company wrote. Many of Activision’s titles are multiplayer games with cross-platform support, meaning revoking access to large swaths of the player base would be nothing short of a disaster.
“They would shoot themselves in the foot making it an exclusive. We’re past that point,” van Dreunen said. “Microsoft’s tenet here is, ‘We need to be cross-platform,’” he added. “’They’re not going to cut out 100 million console players’ — that would be the dumbest thing they could do.”
“Microsoft isn’t as interested in the battle of the console boxes. It believes the future of games is going to be through streaming and subscriptions,” argued GamesIndustry.biz editor Christopher Dring in a recent opinion piece. “Call of Duty isn’t so much a reason to buy an Xbox console, but a reason to subscribe to the Game Pass subscription service.”
Still, Microsoft has tough choices to make about how it incorporates Call of Duty into its increasingly complex gaming ecosystem. Microsoft now releases its first-party games directly onto Game Pass on release day to help boost subscriptions and keep players signed up. But it may not make sense for a new premium Call of Duty to be on Game Pass unless Microsoft decides to split it into pieces: for instance, separating the campaign mode from the multiplayer.
“You have to come up with a logical tier model,” van Dreunen said. “The way Microsoft treated Halo, they did make their success much larger by saying, ‘Hey, we’re going to give the multiplayer component for free and then we’ll sell you the premium one.’ That’s a more layered approach.”
Part of the concern for Call of Duty is that any such bold experimentation or catering to a broader, more mainstream audience — the likes of which Microsoft hopes to reach with Game Pass — might dilute the franchise’s identity. That in turn might drive away the fans who show up every year with money in hand expecting a hardcore shooter that caters to their tastes.
“You can’t really make drastic changes without eating your profits and revenues. The moment they give it away for free, sure, they’ll attract lots of new free players. At the same time, the existing user base will be far less interested,” van Dreunen said. “They like that it’s premium.”
Part of this debate is already playing out inside of Warzone. “Call of Duty is one of those franchises that is relentlessly focused on hardcore games in a classic sense. As it tries to become more like Fortnite, it’s kind of selling out,” van Dreunen said. “Epic, in the same way Apple does for consumer electronics, has set the aesthetic tone; you see it suddenly show up everywhere else. Everything is really brightly colored, pink camouflage pants and yellow rifles. Call of Duty followed this trend.”
But it’s not always well received — just look at the recent uproar over a bright purple assault rifle skin to get a sense of the current fandom squabbling. The fear — and in this case, that fear seems rather founded — is that Call of Duty looks like the “guy in a midlife crisis buying a red convertible,” van Dreunen said.
There are signs Activision is already preparing for a potential future under Microsoft that could involve major shakeups. Bloomberg reported earlier this year that next year’s planned Call of Duty game had been delayed to 2024, creating the first yearly gap in the franchise’s publishing history since 2005. That could give this year’s entry more breathing room and give Activision and Microsoft time to create a clear, concise strategy for updating Call of Duty for a true cross-platform, subscription future.
“It’s a common and timeless question: How do we continue to innovate and adapt to new technologies and audiences without alienating your hardcore player base? Same for film as it is for sports as it is for video games,” van Dreunen said. The same question applies to the game industry’s ever-changing money-making mechanisms: “How do you transition a franchise that is born in a product-based business model into a service-based marketplace? Can you do that well? How do you do that well?”
If the acquisition is approved, Activision will be turning to Microsoft for those answers.
Nick Statt is Protocol’s video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.
Blockbuster hacks are no longer the norm – causing problems for companies trying to track down small-scale crime
Cybercrime is often thought of on a relatively large scale. Massive breaches lead to painful financial losses, bankrupting companies and causing untold embarrassment, splashed across the front pages of news websites worldwide. That’s unsurprising: Cyber events typically cost businesses around $200,000, according to cybersecurity research firm the Cyentia Institute. One in 10 of those victims suffers losses of more than $20 million, with some reaching $100 million or more.
That’s big money – but there’s plenty of loot out there for cybercriminals willing to aim lower. In 2021, the Internet Crime Complaint Center (IC3) received 847,376 complaints – reports by cybercrime victims – totaling losses of $6.9 billion. Averaged out, each victim lost $8,143.
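That per-victim average is just the two reported totals divided out:

```python
total_losses = 6.9e9   # dollars lost, per IC3's 2021 report
complaints = 847_376   # complaints received in 2021
print(round(total_losses / complaints))  # -> 8143
```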
Many identity thefts and online scams, however, net perpetrators even less: just a few hundred dollars. For as little as $25, cybercriminals can purchase a cloned Visa or Mastercard, plus its PIN. That card data opens a treasure trove: Criminals can buy gift cards locally, or pick up fencible commodities such as electronics and jewelry to sell off at a discount.
“Criminals have two primary goals: making money and staying out of harm’s way,” says Nick Biasini, head of outreach at Cisco Talos. Cybercrime provides an attractive avenue for both. “The inherent risk associated with committing cybercrime-fueled fraud is far lower than selling drugs or other types of crime. Additionally, the margins are far better. A criminal can turn a small investment into big profits simply from buying stolen information and using it to commit some form of fraud. During the pandemic unemployment fraud has been a lucrative favorite of criminals. Plus by keeping the monetary values lower they are less likely to draw the attention of state and federal authorities.”
A growing problem for local law enforcement
Cybercriminals can attack virtually anyone from virtually anywhere, and cybercrime-as-a-service, where the non-technically minded can hire tools to hack accounts without any specialist knowledge, has become commonplace. Even organized crime syndicates in Spain and Italy are getting into the game.
Federal authorities, usually alerted by IC3, put their scarce resources toward solving large-scale crimes. They work with financial institutions or corporations most impacted by specific breaches. This means the majority of crimes – with their far smaller paydays – tend to fly under the radar.
A look at the data
But some companies are tracking the rise of small-scale cybercrime. Cisco Talos analyzes data to spot trends that help its incident response team alert customers to potential cybersecurity attacks, then respond to and recover from breaches rapidly.
It has found that while drug felonies dropped drastically over the last eight years before stabilizing during the pandemic, cybercrime has shot up. From 2015 to 2021, the number of reported cybercrimes nearly tripled, and losses soared nearly fivefold.
“Criminals today have a far better technical understanding than they did five or ten years ago,” says Biasini. “Additionally, it shows how they really understand inherent risk; it’s just safer to commit fraud and cybercrime than it is to sell drugs. As an added bonus, they also have become proficient in cryptocurrencies, providing alternative avenues for purchasing illicit goods and money laundering.”
[Charts omitted. Sources: New York Police Department; IC3 2021 Internet Crime Report]
An evolving challenge
If this trend continues, the emerging wave of cybercrime will look less like epic breaches and more like scamming citizens out of their tax return or signing them up for fraudulent unemployment benefits. Those two crimes already rank in the top five of identity theft types for 2021, with unemployment scams leading the pack.
How, then, can we expect local law enforcement to possibly keep up? After all, they’re already busy policing and prosecuting what most people consider ‘real world’ crimes. Cybercrime is an entirely different problem. It requires poring over data from both the criminals themselves and the victims they target, trying to somehow build a solid, forensically sound case.
“Cisco Talos has always worked closely with local, state, and federal law enforcement organizations to help them succeed in their tasks,” says Biasini. “We are always willing and able partners to help take cybercriminals off the streets. We provide law enforcement with information we uncover during our investigations and oftentimes lend our people, processes, and technologies to help investigations already underway.”
One solution is for local law enforcement to identify staffers in their ranks with an aptitude for online sleuthing. Cybercrime units are perfect for people who have a research bent, because digital detective work is a big part of the job.
Another alternative police forces are pursuing is recruiting young people from computer science programs, or tasking high schools with helping train up a new generation of defenders with the mentality and skills to turn what is today a sideline for police into a mainline function. It’s already happening worldwide: In the UK, a $7 million government program led to the creation of cybercrime units in every police force in England and Wales.
And we’re seeing it here too in the United States. Several organizations have stepped up as resources for law enforcement. Every state has at least one agency devoted to helping police fight cybercrime. And the National Computer Forensics Institute offers courses, both in-person and virtual, to train basic and advanced examiners, first responders, and prosecutors and judges.
It’s all in the aim of cracking down on small-time cybercrime, preventing the small leaks that turn into a torrent of losses.
People have been swindled since before man created monetary systems. These aren’t new crimes; just new ways to commit them. But as cybercrime increasingly goes small-time, those on the front lines will need new and more effective ways to fight it.
Chris Stokel-Walker is a freelance technology and culture journalist and author of “YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars.” His work has been published in The New York Times, The Guardian and Wired.
Don’t know what to do this weekend? We’ve got you covered.
This week is all about magic: “Light & Magic” on Disney+ takes us behind the scenes of Disney’s special effects unit; “The Swimmers” reminds us how magical life can be; and “Never Have I Ever,” Mindy Kaling’s Netflix comedy, invokes the magic of “Gilmore Girls,” but for Gen Z.
Mindy Kaling’s Netflix comedy “Never Have I Ever” is a bit like Generation Z’s “Gilmore Girls”: It’s witty, sweet and acutely aware of what it means to grow up as a teenager today — which also means it’s more raunchy and a lot less white than Lorelai and Rory’s sheltered small-town world. The third season premieres on Netflix today, and if the critics are to be believed, it’s going to be brilliant.
You won’t find any hard-hitting investigative journalism in “Light & Magic,” a six-part documentary about Disney’s special effects unit — Industrial Light & Magic — produced by Disney for Disney’s very own streaming service. Still, if you’re a “Star Wars” fan, VFX geek or even just interested in how some of Hollywood’s biggest blockbusters have been made, this is a great oral history about the most influential gang of movie magic wizards. Plus, watching the entire show gives you a great idea of how far Hollywood has come, all the way from the miniature models built for the first “Star Wars” to the virtual production sets of “The Mandalorian.”
Julie Otsuka’s “The Swimmers” is one of those novels that seems to be about nothing in particular, as it casually talks about the lives of a group of swimming pool attendees who have little in common but their shared love for laps. Then it zooms in on one of those swimmers and suddenly becomes about everything: life, aging, memories and the fragility of it all. I almost gave up on this book a few dozen pages in because it seemed too inconsequential, but I’m so glad I stuck with it. Because just like doing all those laps you set out to swim, it’s ultimately immensely rewarding.
Formerly known as BoxVR, FitXR has been a popular fitness app on the Quest for some time. Boxing is still a key part of the app’s workout regimen, but players can now also opt for dance or high-intensity interval training to get their blood pumping. This week, FitXR introduced a mobile companion app that helps you track and follow through on your workout goals, find new classes and even review detailed stats about past workouts. If that sounds like serious business, don’t worry: The VR workouts themselves are immersive and fun enough to make breaking a sweat feel like a game.
Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety’s first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.
Moses, a former FBI tech leader and one-time AWS customer, thinks Amazon’s culture of ownership helps him secure AWS because executives are taught that they are directly responsible for the security of their services.
“That mental model, that starting from scratch building and continuing to do so and never wavering … that model is why we are the most secure.”
AWS customers are used to hearing about the cloud provider’s “shared responsibility” model when it comes to security, which means that while AWS promises customers it won’t allow its servers and networks to be compromised, customers still have to do the work of securing their own applications. Inside the company, however, the buck stops with the head of each service offered by AWS.
“Service leaders are responsible for the profit/loss, success/failure and, most of all, the security,” said CJ Moses, AWS’ chief information security officer (CISO) since January. “There are no excuses or finger pointing, so leaders don’t leave security success to chance, but rather actively own it.”
Moses, who has worked at AWS for nearly 15 years, previously led the technical analysis of computer and network intrusion efforts for the FBI’s cyber division and was a computer crime investigator as a special agent with the Air Force Office of Special Investigations. At AWS, he spent more than five years running and building its secure government offerings, including AWS GovCloud and the U.S. Intelligence Community cloud under its Commercial Cloud Services (C2S) contract.
Now as CISO, Moses is responsible for security across AWS’ cloud platform, leading product design and development, security engineering and strategy. He hosts a weekly security review meeting with AWS CEO Adam Selipsky and his senior vice presidents and select vice presidents.
“This meeting is the mechanism that enforces the culture that security is ‘job zero’ at AWS,” Moses said. “People are held accountable for resolving open issues, and strict timelines are adhered to for resolution.”
Moses also holds weekly application security review meetings with all of AWS’ service team owners. New services will not launch if there are any known security issues open, he said, but delaying a launch is very rarely required.
“Our security teams are deeply engaged with new services and new feature development from the beginning,” he said in a recent interview with Protocol. “A highly collaborative, as opposed to oppositional, culture when it comes to security reinforces the trust between service teams and security teams.”
This interview has been edited and condensed for clarity.
What are your duties as CISO?
It really comes down to making sure that we have the right tools, techniques, processes and people in place from the start, shifting as far left as we possibly can — meaning that security is part of the design of the things that we’re making. And not only security in mind from the design standpoint, but the protections that you can put in place, detective or otherwise.
If you have a scanner that’s running across your code after it’s already been written, that means that you didn’t catch it in the design or the initial coding phase. Every possibility that you can have to move further and closer to where code is being written by individuals or even further into the design phase means [reduced overhead], both from a development time as well as from a security perspective, to the overall process. Finding an issue after something’s gone into production and is public, and you have a CVE and all of that process, it’s very expensive to then mitigate that and to patch. We’ve moved as far to the left as we can and mechanized things.
One of the things this year that we found is that moving a lot of the code analysis straight into — before there’s ever even security reviews officially — the builder space, into the developer environments that they use, so that things are getting fixed before security officially would kick in and do reviews of the software. The good part of that is the developers are then catching it as it happens and then changing it is an education for them. They’re like, “Oh, it caught that I did this. This is an anti-pattern that I shouldn’t do,” and then they don’t do it again. And the percentages of increased capacity, if you will, is huge there because, once again, further left that we can shift stuff.
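Moses doesn’t describe AWS’ internal tooling, but the pattern of catching anti-patterns before an official security review is familiar from open-source workflows. Below is a minimal sketch of a git pre-commit hook that runs the Bandit security linter on staged Python files; Bandit and the git plumbing commands are real, while the hook itself is a generic illustration, not AWS’ pipeline:

```python
#!/usr/bin/env python3
# Generic pre-commit hook: block the commit if Bandit flags staged Python files.
import subprocess
import sys

# List files staged for this commit (added, copied or modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.split()

py_files = [f for f in staged if f.endswith(".py")]
if py_files:
    # Bandit exits non-zero when it finds issues; show them to the developer.
    result = subprocess.run(["bandit", "-q", *py_files])
    if result.returncode != 0:
        print("Fix (or explicitly justify) the findings above before committing.")
        sys.exit(1)
```

The point of hooks like this is exactly what Moses describes: the developer sees the finding at commit time, learns the anti-pattern, and the issue never reaches a formal review.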
My goal, in the fullness of time, would be to put our operations or responsive operations out of business. It’s not a real possible goal, but as much as we can to move things to the left so that we’re finding them earlier, remediating them when they can be most impactful and people learn from that so that you don’t have them happen again, the better situation we’ll be in across the board.
[The] last thing that any software development engineers like to do is repetitive, boring stuff. And the more that we can make it an automated process earlier on, the less impactful it is to their timelines of developing and deploying innovative new services or features. So far, the feedback from the teams themselves is positive. And that’s really what I like is that you’re making a security impact, but you’re also making the developers and the teams themselves that are trying to build new capabilities for AWS users, you’re making them happy. It goes back to … making security the path of least resistance.
What is AWS’ security strategy? Microsoft seems to be throwing things at customers all the time, and Google Cloud has said it wants to provide end-to-end security.
Overarchingly, we’ve always wanted to work backwards from the customer. We don’t want to tell the customer what they need to do. We actually want to work backwards from them to understand what their needs are. And we’re hearing from customers on a regular basis that … they see our cloud as being the most secure cloud there is. But they also want the ease of use to be able to be secure in the cloud as well. [At the AWS re:Inforce conference last month, vice president of AWS platform Kurt Kufeld] announced a lot of features and functionalities moving towards making it even easier, rather than only having these services that you can put together in order to have a comprehensive environment — to pull those services together into solutions that meet business needs without having to have a lot of the work in the customer space.
What you’ll see from us more going forward, and you have kind of seen already, is creation and bringing together of security services on top of the normal AWS services that we have in order to create that “easy-button” experience of being able to deploy and operate in AWS and do so in a fashion that is the path of least resistance. It makes it easier for our customers to be innately secure when operating in the space.
Many times we get asked from customers to share our threat intelligence with them. Obviously, at scale, it’s very difficult to pick up the phone and call a million-plus customers to say this is what we’re seeing. As threat intelligence — things we learn by operating in the space — is found, we’re pushing those things directly as near real time as we possibly can into the services we’re offering to customers. GuardDuty, as a threat detection service, is an obvious one; Security Hub and others, as part of that, are there. The idea is that as we’re finding stuff, there shouldn’t be the lag or delay to our customers of taking that intelligence that we’ve gathered or that experience … and pushing it into those products.
So these days, when I get asked, “Will you engage in a threat intel sharing agreement,” I say, “Turn on GuardDuty — just turn it on and use it.” And then all of the back-end process of sharing and then figuring out how you get it in your system and your own network, it’s just not there anymore. You don’t need to do that. We’re going to do it for you. The model that works going forward is automation mechanizing. And that’s, from our big picture, continuing to move down that path of having AWS be not only the most secure, but the simplest to be secure by default.
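For reference, “just turn it on” really is close to a single API call. A minimal sketch using boto3: create_detector is the real GuardDuty API, while the region and publishing frequency here are arbitrary example choices.

```python
import boto3

# GuardDuty is enabled per region; us-east-1 is an arbitrary example.
guardduty = boto3.client("guardduty", region_name="us-east-1")

# One detector per account per region; this is the "turn it on" step.
response = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print("GuardDuty enabled, detector:", response["DetectorId"])
```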
AWS isn’t always known for being the easiest service to use.
I know. But the reality is that we are the blank canvas that allows you to create the Mona Lisa. And what we want to do now is to be able to allow you still to paint the Mona Lisa, but in this case to be able to do so in a secure fashion.
You bring up the simplicity. I think that you’ll see across AWS that the focus is to continue down the path of being the most capable cloud provider in the world, to have the most ubiquitous security and other capabilities, at the same time, increasing the simplicity of being able to do so. There’s a huge capability that we have there, and we’ve focused on having all the features and all the building blocks for so many years.
There was no other cloud provider that’s ever had that kind of capability built from day one by the paranoid group that we have, with the expertise, that have been chasing hackers around the world.
There has been a transition. Security Hub is a good example specific to the security space, trying to cordon in to where it makes it easier for the security professional to be able to go to one place and see the alerts and things of that nature without having to go to consoles for each of the different offerings. And you’ll see more and more of that over time across AWS, not only in [the] security space, but overall — being able to focus those services towards solutions. That’s one of the things that you see a lot of our ecosystem is strong at. You have providers that are our partners that are taking our services, putting them into very easy-to-use, press a button to provide a solution across the board.
You mentioned customers saying that AWS is the most secure cloud. Do you believe that and why?
I have no doubt in my mind that it is. I came from the FBI to AWS because I was a potential customer. Back when AWS was one region, five services, the security that AWS had day one was the log-in password and user ID from the bookstore — from Amazon’s website. We had a business need at the FBI that we were supporting, and it was a counterterrorism effort, and we had essentially what they call big data today — mining that big data, basically looking for the needle in the needle stack in order to keep bad things from happening to good people. We had a lot of vendors that were out there that wanted to support and did support us. The “gotcha” was that no matter how much of the same stuff we bought, we never were able to make that scale function or that step function [for] Friday at 4:30, [when] the digital truck would back up with more data. And you want the definition of “keeping yourself up at night”? That’s the job, because you know that if you don’t find that needle that bad things are going [to] happen to good people, and it’s going to be your fault.
When EC2 was launched by AWS [in 2006] … the idea of saying, “OK, I can use 1,000 computers for an hour rather than having one computer for 1,000 hours” — the time to value is huge. So we [at the FBI] had some meetings with [AWS chief evangelist] Jeff Barr … and said, “Hey, this is the mission we have, you guys have this,” and they were like, “We want to be able to do that, but we’re not in a position today. We don’t have the infrastructure, the security, the background, all of the features that you’re going to need to do that kind of business on top of us.” The discussion went on for six or eight months and subsequently [former AWS CEO] Andy Jassy, a visionary that he is, said, “Hey, there’s only one way we’re ever going to get to that business, and it’s having people like you join us, bring us into the enterprise out of just individual developers and startups, build those capabilities and take us forward.”
The security story was very weak [on] day one. A handful of us — (former AWS CISO) Steve Schmidt, myself, Andrew Doane and Eric Brandwine — joined in late 2007, and our job was the dedicated utility computing team — the DUC team, also known as the feds [because] you had a bunch of us coming from the FBI. We weren’t given a distinctive thing we had to do other than move us towards the enterprise. We thought about the mission that we had previously and how we could build from scratch the environment that we needed to have in order to be able to do the highly secure work that we were doing. We were paranoid, but we were paranoid for good reason because we did know, in our previous lives, that they were out to get us. So we came into AWS with that mentality and built from scratch day one that foundation. There was no other cloud provider that’s ever had that kind of capability built from day one by the paranoid group that we have, with the expertise, that have been chasing hackers around the world. Built it from bare bones.
Talk about shifting left. We shifted left 15 years. Started with EC2, rewrote basically EC2. The virtual private cloud that’s spoken of today, we created VPC, and it was our first product. We went from being dedicated utility computing to virtual private cloud. We wrote the underpinnings — the virtual network overlay protocol — so we run our own protocol on the network in order to be able to maintain isolation between all of our customers. We started with that and then grew, work, scale, created.
The security culture that you see today is based upon us taking our security mindset and Amazon’s ownership culture, jamming them together in the idea that you have single-threaded owners that own their business beginning to end and including security as part of that, such that at the end of the day, there’s no finger-pointing. If EC2 has a security issue, the owner of EC2 knows it’s their responsibility. It’s also my responsibility to enable them and make sure it doesn’t happen. So we share that responsibility, but straight up, they know that is theirs to own, and they’re going to be the ones … answering to that. That mental model, that starting from scratch building and continuing to do so and never wavering … that model is why we are the most secure. Other cloud providers have created capabilities that really are add-ons to things later on. We started from scratch, built not only the underpinnings of the technology, we built the culture, and every one of the service teams thereafter were built on the security culture that we have created at AWS.
To this point, I think that we’ve represented ourselves pretty well. There’s been a lot of threats and adversaries and other things out there, and we have the longest track record of operating on the internet with all of those things attacking us. Not to mention, we’ve won some pretty big contracts with government entities that are known for being able to identify what is secure and what is not.
What’s the biggest threat to cloud security right now and how do you stay ahead of all these bad actors?
You have to think of it this way: Humans are behind everything that happens. And in cyber threat, you look at it from the standpoint of every threat that comes at you, every adversary there is, there’s actually a human on the other side of that keyboard. I learned this very, very near and dear to my heart chasing hackers around the world.
During the [buildup to] Russia invading Ukraine … you could actually see, from threat intel, known ransomware actors in Russia that were essentially extorting money from people to restore their infrastructure. During that buildup, all of a sudden they went from being ransomware to just “delete everything.” The humans’ intent behind the scenes changed. So from a threat-actor perspective, you have to understand the adversaries and then … put in the protections in order to be able to not only … prevent, but in the case of anything that gets past the prevention, to detect them.
Understanding how those actors act and types of things that they do is important to understanding how you get in front of [it]. Don’t look at what’s going on today, look at what’s going to happen not only tomorrow, look … into the future and build the capabilities there. The idea that we are thinking that far in advance, understanding the long game, allows us to get in front of those threats.
What are the biggest security mistakes that you see enterprise customers repeating?
I don’t think that there’s huge trends and things that customers or enterprises are making. Essentially it comes down to that customers need to be focused on making sure that they’re doing the things that are within their control within the cloud or within the environments. Customers have to take responsibility for that which they have access to. We have created all kinds of tools — access analyzers and other things like that — in order to enable them.
The focus going forward is to be able to establish more and more guardrails. The ability to block all your S3 buckets from being accessible from the internet is a good example of one of those controls. And as we go forward, you’ll see more and more capabilities like that that you can add from an executive governance level that’ll allow those guardrails to be in place to allow customers to be able to have their developers have that ability to be free and do the innovation that they need to do while also putting the controls in place across the board.
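The S3 guardrail Moses describes corresponds to S3’s Block Public Access feature, which can be applied account-wide. A minimal sketch with boto3: put_public_access_block is the real S3 Control API, and the account ID below is a placeholder.

```python
import boto3

s3control = boto3.client("s3control")

# Account-wide guardrail: developers keep building, but no bucket in the
# account can be opened to the public internet. Account ID is a placeholder.
s3control.put_public_access_block(
    AccountId="111122223333",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # cut off existing public policy access
    },
)
```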
Donna Goodison (@dgoodison) is Protocol’s senior reporter focusing on enterprise infrastructure technology, from the ‘Big 3’ cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.
The OCC is facing calls to pull guidance allowing banks to conduct some crypto-related business.
Lawmakers including Sens. Elizabeth Warren and Bernie Sanders fear crypto could introduce systemic risk to banking without strict guardrails.
As efforts to pass federal crypto legislation are, maybe, finally picking up steam in Washington, so, too, is the debate about how traditional banks should approach the sector.
A group of progressive senators including Elizabeth Warren and Bernie Sanders are calling on a federal banking regulator to pull Trump-era guidance that gives banks limited clearance to engage in crypto-related business.
In a Wednesday letter addressed to the Office of the Comptroller of the Currency, the senators pushed forward an ongoing debate over the role banks should play in the crypto ecosystem. Banking industry groups say the regulated institutions can bring stability to the volatile sector. But the lawmakers fear crypto could introduce systemic risk to the broader banking system without strict guardrails.
“In light of recent turmoil in the crypto market … we are concerned that the OCC’s actions on crypto may have exposed the banking system to unnecessary risk,” reads the letter, which was also signed by Sens. Sheldon Whitehouse and Dick Durbin.
Warren circulated a draft version of the letter within the Senate Banking Committee last week, as first reported by Bloomberg and American Banker. The letter calls on the regulator to withdraw previous guidance and start a more comprehensive process “that adequately protects consumers and the safety and soundness of the banking system.”
The OCC’s current guidance was published in late 2020 and early 2021. It gives federally chartered banks clearance to provide crypto custody service, hold cash reserves backing stablecoins and use blockchain technology and stablecoins to verify bank-to-bank payments.
When asked about news reports on the letter, an OCC spokesperson on Tuesday sent Protocol previous comments from acting OCC head Michael Hsu describing the agency’s “careful and cautious” approach to crypto.
Hsu defended the agency’s approach in a comment to Bloomberg when the outlet reported, on Aug. 3, that Warren was circulating a letter calling for the OCC to pull the guidance.
“I think we’re doing a pretty good job. See exhibit A: a whole bunch of stuff just happened, and the banking system is in pretty good shape, knock on wood. I think part of that is the actions we’ve taken,” Hsu told Bloomberg.
The senators cited the bankruptcies of firms Celsius and Voyager, which ran crypto-lending businesses that operated outside of the OCC’s purview. Still, the bankruptcies make “clear that stronger protections are necessary to mitigate crypto’s risks to the financial system and consumers,” the letter reads.
Hsu is a self-described crypto skeptic and promised to review the crypto-related guidance when he took leadership of the OCC in May 2021. The guidance was published under Hsu’s predecessor, Brian Brooks, who is now CEO of crypto company BitFury.
The agency said in November it would keep the provisions in place, with the added caveat that banks must apply to the OCC for a non-objection before engaging in any crypto activity.
But, in the senators’ view, that change does not go far enough.
A banking industry trade group recently argued that limiting banks’ participation in crypto is counterproductive to protecting consumers. A Monday letter from the American Bankers Association to the Treasury Department noted banks are facing restrictions that mostly keep them out of digital assets, while there is still little regulation for non-banks involved in crypto.
“The combination of these two approaches — inaction on the one hand to bring into the regulatory perimeter non-bank crypto companies, and limitation on the other of banks’ ability to engage responsibly in the digital asset market — creates an environment that makes it nearly impossible for responsible financial innovation to occur in this space,” wrote Brooke Ybarra, senior VP of innovation and strategy at the ABA.
The association was responding not to Warren and Sanders’ letter, but to a comment process the Treasury launched in July. The department sought input on, among other things, crypto’s potential impact on markets and major financial institutions, as directed by President Joe Biden’s executive order in March. Biden’s order is built around the idea that the U.S. needs to take a “whole-of-government” approach to regulating crypto.
Banks are not a monolith, and some are more skeptical of crypto than others. Some institutions have explored using blockchain technology for things like settling money transfers. Some are providing services holding custody of crypto assets or customer cash for crypto companies.
When asked about the letter, OCC-chartered crypto custodian Anchorage Digital said that lawmakers should be focused on bringing more crypto businesses within view of regulators.
“If we truly want to protect consumers, we need to pave a workable path forward for regulated institutions to provide crypto services, which was the very intent of the OCC’s guidance,” Anchorage general counsel Georgia Quinn told Protocol.
Warren has certainly voiced support for bringing stricter regulation to crypto as a whole. But consumer protection groups, which generally align with Warren, have flagged crypto’s entry into the traditional banking system as being of particular concern. They say there needs to be clarity beyond the current guidance.
“We don’t really know much about how exposed banks are to crypto risks or how regulators are weighing in,” said Mark Hays, a senior policy analyst on fintech at Americans for Financial Reform. “Given the recent crash, we should, and it would be better if regulators started from first principles and applied the full suite of banking regulations from the outset rather than take the ‘maybe, maybe not’ approach currently in play.”
The senators’ letter calls on the OCC to take up a new process with the Federal Deposit Insurance Corp. and Federal Reserve to clarify how the banks they oversee can engage with crypto. The letter also includes a series of questions about how many OCC-regulated banks are engaging in crypto activities.
The OCC, FDIC and Federal Reserve released a joint statement late last year promising further clarity would come for banks on crypto in 2022 — but guidance since then has been limited. The FDIC recently put out a statement warning banks they must monitor how the crypto firms they partner with advertise the availability of deposit insurance. That concern, plus Warren and Sanders’ attention, could be a signal of additional action coming.
Along with Biden’s executive order, there are several bills aimed at regulating various parts of the industry in the Senate, including one filed in early August that would give the Commodity Futures Trading Commission larger oversight of the industry. While the banking industry is not the focus of those bills, they could help influence how bank regulators approach crypto.
“Just having clarity between what’s a security token and a non-security token would be very helpful,” said Gary DeWaal, chair of Katten Muchin Rosenman LLP’s financial markets and regulation practice. “Over time, once you have a key regulator in place at the federal level, you’ll have better standards on custody, better standards on cybersecurity — that will benefit the banking regulators, too.”