Alphabet Inc’s Google introduced new ad-buying tools on 10 July that showcase its growing push to allow machines instead of humans to fine-tune ads and determine where they should run.
Advertisers have welcomed the advanced software, which could encourage them to spend more on Google as it makes more efficient use of their money. But consumer privacy and technology analysts are watching the shift with concern, and a push for more regulatory scrutiny may be coming.
In Europe, the month-old General Data Protection Regulation requires end users to consent to being the subject of some forms of automated decision making. The rule also requires transparency about data involved and an effort to prevent bias, though what is covered is likely to be litigated.
Google’s new ad services are built with machine learning, in which software analyzes historical data pairing starting conditions with end results, then decides how to maximize a desired result under new, real-time conditions.
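The pattern described above — learn from past conditions and outcomes, then pick the action expected to maximize a result — can be illustrated with a minimal sketch. This is generic illustrative code, not Google's system; the scenario (choosing the hour of day with the best historical click rate) is invented:

```python
# Minimal sketch (not Google's code): learn from historical
# (condition, outcome) pairs, then pick the condition that
# maximizes the predicted result.
from collections import defaultdict

def fit_click_rates(history):
    """history: list of (hour_of_day, clicked) pairs from past campaigns."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for hour, click in history:
        shown[hour] += 1
        clicked[hour] += int(click)
    return {h: clicked[h] / shown[h] for h in shown}

def best_hour(rates):
    """Choose the hour with the highest estimated click rate."""
    return max(rates, key=rates.get)

history = [(9, True), (9, False), (14, True), (14, True), (20, False)]
rates = fit_click_rates(history)
print(best_hour(rates))  # hour 14 has the highest observed click rate
```

Real systems replace the simple frequency counts with trained models over many features, but the structure — fit on old data, optimize on new conditions — is the same.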
Google said its machine learning can now predict when to show ads so that, given a certain budget, it can maximize foot traffic to stores or favorable consumer sentiment toward a brand.
It also announced broader availability of a tool that automatically chooses the best text for ads in Google search results from an advertiser-created list of up to 19 phrases.
Users making the same query might see different versions of an ad “based on context,” the company said in a blog post on 10 July as it opened its annual conference for advertisers.
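Google has not published how this tool picks among the advertiser's candidate phrases, but one standard technique for rotating options and converging on the best performer is a multi-armed bandit. The epsilon-greedy sketch below is purely illustrative; the class name and click-through setup are invented:

```python
# Illustrative only: a generic epsilon-greedy bandit for rotating
# candidate ad phrases -- not Google's actual selection algorithm.
import random

class PhraseSelector:
    def __init__(self, phrases, epsilon=0.1):
        self.phrases = phrases          # advertiser-supplied candidates
        self.epsilon = epsilon          # fraction of time spent exploring
        self.shows = {p: 0 for p in phrases}
        self.clicks = {p: 0 for p in phrases}

    def choose(self):
        # Occasionally show a random phrase to keep gathering data;
        # otherwise show the phrase with the best observed click rate.
        if random.random() < self.epsilon:
            return random.choice(self.phrases)
        return max(self.phrases,
                   key=lambda p: self.clicks[p] / self.shows[p]
                   if self.shows[p] else 0.0)

    def record(self, phrase, clicked):
        self.shows[phrase] += 1
        self.clicks[phrase] += int(clicked)
```

A context-aware system, as the blog post suggests, would additionally condition the choice on features of the user and query rather than keeping one global tally per phrase.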
Critics fear that machine learning increases risks of discrimination and privacy intrusions in advertising. Machines can learn to prey on vulnerable individuals or withhold offers to people based on sensitive traits such as race.
Google does not allow targeting ads to users based on race, but its “algorithms might be doing it by proxy unbeknownst to the company” by relying on other information that approximates race, said Dipayan Ghosh, a Harvard University fellow and former public policy staffer at Facebook.

Sridhar Ramaswamy, Google’s senior vice president for ads, told Reuters last month that the company has researched “fairness” in machine learning extensively but it is “not a solved problem.” He said Google has begun checking for biases using test data with some algorithms, including one that determines which YouTube videos are suitable for advertising.

Balancing privacy with business goals is another focus. Machine learning helps Google more effectively analyze user data to measure store visits and intention to purchase an item.
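The kind of bias check Ramaswamy describes can be sketched in a few lines: run a model over labeled test data and compare outcome rates across groups defined by a sensitive attribute. A large gap between groups flags a possible proxy effect of the sort Ghosh warns about. The function name and data below are hypothetical:

```python
# Hypothetical sketch of a bias check on test data: compare how
# often a model's decision is positive for each group. Data and
# group labels are invented for illustration.
def selection_rates(decisions, groups):
    """decisions: list of bool model outputs; groups: parallel group labels."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(d)
    return {g: positives[g] / totals[g] for g in totals}

decisions = [True, True, False, True, False, False]
groups    = ["x", "x", "x", "y", "y", "y"]
rates = selection_rates(decisions, groups)
gap = abs(rates["x"] - rates["y"])   # a large gap suggests disparate treatment
print(rates, round(gap, 2))
```

This demographic-parity comparison is only one of several fairness criteria, which is part of why, as Ramaswamy put it, fairness in machine learning is “not a solved problem.”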
But assessing how advanced systems treat user privacy can be difficult without details on how the decision-making works, said Marc Rotenberg, president of the Electronic Privacy Information Center.
“Algorithmic transparency is key to accountability,” he said.