SAN FRANCISCO — The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.
The new AI Text Classifier by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool — like others already available — is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI's alignment team, which is tasked with making its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI's website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.
“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company's detection service is in place.
OpenAI emphasized the limitations of its detection tool in a recent blog post, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer the passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text (a college admissions essay, say, or a literary analysis of Ralph Ellison’s “Invisible Man”) and the tool will label it on a five-point scale: “very unlikely,” “unlikely,” “unclear if it is,” “possibly” or “likely” AI-generated.
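As a rough illustration of how such a verdict could be produced, here is a minimal Python sketch that maps a classifier’s probability score to the five labels; the thresholds below are hypothetical, not OpenAI’s actual cutoffs.

```python
# Hypothetical thresholds for turning an AI-probability score into one
# of the five verdicts; OpenAI has not published its actual cutoffs.
def label_text(p_ai: float) -> str:
    """Map a score in [0, 1] to a five-point verdict."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.65:
        return "unclear if it is AI-generated"
    if p_ai < 0.90:
        return "possibly AI-generated"
    return "likely AI-generated"

print(label_text(0.72))  # prints "possibly AI-generated"
```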
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings yet often confidently spits out falsehoods or nonsense, the classifier offers no easy way to interpret how it arrived at a result.
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.
“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland, that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.
“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.
How AI predicts what you’ll buy

It’s a jungle out there, and nowhere more so than in the world of “smart” advertising.
There, marketing geniuses have developed increasingly sophisticated algorithms that take all the information gathered about you online or from your phone and piece together a customer profile that may include everything from your favorite pair of socks to your children’s names.
Analyzing current market practices, Wicked Reports explored how artificial intelligence, or AI, can be wielded to gather data and make sales predictions across the internet. You may already know some of the techniques, such as persistent cookies that turn your computer into a ping hub for the websites you visit. Others are much more sophisticated, compiling a profile of your characteristics by analyzing what you’ve bought in the past, what you’ve put in your cart and abandoned, and what you’ve searched for. From there, advertisers can even model customers who look just like you and market to them as well.
The digital advertising industry is expected to crest $20 billion in 2022. That’s far from enough to crack the top 10 biggest industries in the U.S., but it’s a substantial amount of money—particularly when compared to the big-ticket ad buys of the past in splashy magazine spreads. Companies today are more eager than ever to spend what it takes to bring in ideal customers.
Continue reading to discover some of the tactics AI uses to predict buying behaviors.
Compiling user movement across the web

You may know about cookies: tiny text files that websites deposit on your computer as a way to track online behavior.
When you visit websites from Europe, for example, a law there mandates that you click through a cookie agreement that’s much more transparent than in the U.S. There are session cookies lasting one browsing “session” (until you restart your computer or browser) and persistent cookies that stay until you delete them. Think of a cookie as a waving arm each time you visit the same website. Together, they form a heat map of how often and when you visit every website in your browsing history. They can even flag your presence to other websites as a way to combine your data.
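For the technically inclined, the difference between the two cookie types comes down to a single attribute on the Set-Cookie header a site sends. A minimal sketch using Python’s standard http.cookies module, with made-up names and values:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Session cookie: no Max-Age or Expires, so the browser discards it
# when the browsing session ends.
cookie["session_id"] = "abc123"

# Persistent cookie: Max-Age keeps it on disk until it expires or the
# user deletes it.
cookie["visitor_id"] = "xyz789"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # one year, in seconds
cookie["visitor_id"]["domain"] = "example.com"        # placeholder domain

print(cookie.output())
# Set-Cookie: session_id=abc123
# Set-Cookie: visitor_id=xyz789; Domain=example.com; Max-Age=31536000
```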
Identifying user characteristics

User characteristics, and something called demographic segmentation, are a key way online advertising targets you. User characteristics are any of your qualities, from your gender and age to the car you drive and the pets you own. These characteristics feed into the advertising concept of demographic segmentation, in which companies can buy lists of very narrowly defined groups of people.
Are you a 25-year-old white man with one dog, a full-time job as an auto tech, and an apartment rental in a “transitional” neighborhood? We have just the plaid shirt for you.
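Mechanically, demographic segmentation amounts to filtering records on attributes. A minimal Python sketch with entirely made-up customer data:

```python
# Made-up customer records; real segment lists are assembled from
# purchased and tracked data at much larger scale.
customers = [
    {"age": 25, "gender": "male", "pets": ["dog"], "job": "auto tech", "renter": True},
    {"age": 41, "gender": "female", "pets": [], "job": "teacher", "renter": False},
    {"age": 25, "gender": "male", "pets": ["cat"], "job": "auto tech", "renter": True},
]

# Narrow the list down to the one audience described above.
plaid_shirt_audience = [
    c for c in customers
    if c["age"] == 25
    and c["gender"] == "male"
    and "dog" in c["pets"]
    and c["job"] == "auto tech"
    and c["renter"]
]

print(len(plaid_shirt_audience), "customer(s) match")  # 1
```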
Mapping user location data

If you’ve used GPS in your smartphone or any of the hyperlocal dating apps, you’ve leveraged location data to your advantage—at least for now.
How does your phone know where you are? Your phone constantly checks in with nearby cellphone towers, which gives a rough fix on your position. In your home, your Wi-Fi network’s location is likely already recorded in positioning databases, and the same is true of any Wi-Fi network you join during your errands, at school, at work, and so forth. On top of that, GPS can pinpoint your phone to an alarmingly small area as you carry it around: not just your home, but one corner of one room.
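To give a sense of how small that area can be, here is a short Python sketch that uses the standard haversine formula to measure the distance between two nearby GPS fixes; the coordinates are arbitrary examples.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude fixes."""
    r = 6_371_000  # mean Earth radius, in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two fixes that differ in the fifth decimal place are a few meters
# apart, roughly one corner of one room.
print(round(haversine_m(37.77490, -122.41940, 37.77492, -122.41942), 1), "m")
```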
Matching new users to known customers who look and act in similar ways

Some items on this list are not very surprising, or we’ve been told about them often enough that they no longer seem as insidious and scary as they once did. But many people would still be surprised by the lengths companies will go to in order to advertise to you more effectively. Your favorite clothing store, for example, might put together a complete data “picture” of you: what you’ve purchased from it, what size you shop for, your address, and more. Then it can reverse engineer someone just like you and buy a demographically matching list.
That list can be filtered on any attribute until just the desired customer base remains, and then the ads are bought.
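A toy version of that reverse-engineering step: encode customers as numeric feature vectors and rank prospects by their similarity to one known good customer. The feature names and all numbers below are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical features: monthly spend, visits per month, basket size.
known_good_customer = [250.0, 4.0, 3.2]

prospects = {
    "prospect_a": [240.0, 5.0, 3.0],
    "prospect_b": [20.0, 1.0, 1.0],
}

best = max(prospects, key=lambda name: cosine(known_good_customer, prospects[name]))
print("closest lookalike:", best)  # prospect_a
```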
IP address targeting by network connection

How much do you know about your IP address? Many of us are old enough to remember a time when connecting to the internet required knowing a specific IP address and typing it into our PC settings.
Today, the router in your home is assigned a public IP address by your internet provider, and blocks of those addresses can be mapped to physical areas. That mapping may be for sale to different companies because, with the right technology and a set of addresses whose locations are already known, they can infer the rest and guess where you live. Apple is among the tech companies pushing back on IP targeting of this nature by masking IP addresses from trackers in its Safari browser.
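Conceptually, IP geolocation is a lookup from address blocks to places. A minimal Python sketch, using reserved documentation addresses and a made-up table rather than any real geolocation database:

```python
# Made-up prefix-to-place table; real services license far larger
# databases built from ISP records and observed traffic.
GEO_TABLE = {
    "203.0.113": "Springfield (made-up city)",
    "198.51.100": "Shelbyville (made-up city)",
}

def guess_region(ip: str) -> str:
    """Guess a region by matching the first three octets of an address."""
    prefix = ip.rsplit(".", 1)[0]  # drop the last octet
    return GEO_TABLE.get(prefix, "unknown")

print(guess_region("203.0.113.42"))  # Springfield (made-up city)
```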
This story originally appeared on Wicked Reports and was produced and distributed in partnership with Stacker Studio.