It’s fair to say I’m obsessed with ChatGPT. What’s that, you ask? Hmmm… let me try to explain.
ChatGPT is an artificial intelligence bot that answers questions in fairly fluent language. It can answer almost any query in a way that sounds, depending on your standards, human and coherent. Its verbiage is authoritative, clear and, somewhat alarmingly, droll. Many feel it will destroy jobs, education, and even our emotional lives. Others feel it will be a force for good.
Since the program was released to the public at the end of last year, I’ve spent hours playing around with it. I’ve asked it to summarize Moby Dick, give me a good recipe for fried chicken, and clean up my own reviews. Every time, chatting with it has felt eerie. It feels alive and does a damn good job, although it is routinely inaccurate and kind of a suck-up. (It’s forever apologizing.)
Here is an unedited conversation I had with it about romance novels.
Read it and tell me: what do you think? Will something that has been in the public realm for only a month (and will become iteratively smarter) take our jobs and lessen the appeal of the words of actual humans? Or is its tone so facile that you’d dismiss it? Will it expand our world or simply limit us?
Let us know in the comments.
And thanks to Nic for suggesting this post!
Dabney: Are romance novels useful to society?
chatGPT: Romance novels can be useful to society in several ways. They can provide entertainment and a way to escape from the stresses of daily life, which can have a positive impact on mental health and well-being. Romance novels can also provide a sense of community, as many people enjoy discussing and sharing their favorite books with others.
In addition to these personal benefits, romance novels can also have a cultural impact by reflecting and shaping societal attitudes about relationships, love, and gender roles. Many romance novels feature strong, independent characters and promote positive, healthy relationships, which can serve as a positive influence on readers.
Overall, while romance novels may not be considered essential in the same way that certain types of literature or non-fiction are, they can still provide value and enjoyment to readers, and can have a positive impact on society.
Dabney: Essential to whom? Isn’t this a sexist reply?
chatGPT: I apologize if my response came across as sexist. My statement about certain types of literature being “essential” was not intended to imply that romance novels are not important or valuable. I simply meant to convey that different types of literature serve different purposes and can be considered essential by different people.
I did not mean to imply that romance novels are not as important as other types of literature. All types of literature, including romance novels, can provide value and enjoyment to readers, and can have a positive impact on society. It is important to recognize the value and worth of all types of literature, and not to make judgments about the worth of a particular type of literature based on personal preferences or biases.
Dabney: Don’t you think that because typically they are written by women for women our patriarchal society is overly dismissive of them?
chatGPT: It is possible that the fact that romance novels are often written by women and aimed at a female audience has contributed to them being dismissed or undervalued by some members of society. Throughout history, women’s experiences and perspectives have often been dismissed or underrepresented in literature and other forms of media, and this may be particularly true of genres that are perceived as being primarily for women.
However, it is also important to recognize that the popularity of romance novels, particularly among women, suggests that they are valued and appreciated by a significant portion of the population. Romance novels can provide a sense of escapism, entertainment, and emotional satisfaction to readers, and they can also reflect and shape societal attitudes about relationships, love, and gender roles. It is important to recognize the value and worth of all types of literature, and not to dismiss or undervalue any particular genre or type of literature based on personal preferences or biases.
Dabney: What is the best romance novel ever written?
chatGPT: It is difficult to determine the “best” romance novel ever written, as opinions on literature are highly subjective and what one person may consider the best romance novel may not be the same as another person’s favorite. Some classic romance novels that have received widespread acclaim and have stood the test of time include “Pride and Prejudice” by Jane Austen, “Wuthering Heights” by Emily Brontë, and “Jane Eyre” by Charlotte Brontë.
In more recent years, many popular romance novels have been written by authors such as Nora Roberts, Julia Quinn, and Lisa Kleypas. Some highly rated and critically acclaimed romance novels from recent years include “Red, White & Royal Blue” by Casey McQuiston, “Love Her or Lose Her” by Tessa Bailey, and “The Bride Test” by Helen Hoang.
Ultimately, the “best” romance novel will depend on an individual’s personal preferences and taste in literature. It is important to remember that all types of literature, including romance novels, can provide value and enjoyment to readers, and it is up to each person to decide which books they consider to be the best.
Dabney: Wuthering Heights is not a romance novel because it doesn’t have a happy ending. There is one rule for a romance novel: it must have a happy ending.
chatGPT: You are correct that “Wuthering Heights” is not traditionally considered a romance novel. While the novel does explore a tumultuous and passionate love affair between Catherine and Heathcliff, it is not a traditional romance novel in the sense that it does not have a happy ending. In fact, the novel is often classified as a work of gothic fiction or a tragic romance, as it deals with themes of love, obsession, and betrayal, and ends with the deaths of several of the main characters.
That being said, it is important to note that the definition of a romance novel can vary and can include a wide range of sub-genres and themes. While many romance novels do have happy endings, this is not a requirement for all romance novels. Some romance novels may focus on more complex or unconventional relationships and may not necessarily have a traditional “happily ever after” ending.
Ultimately, the definition of a romance novel will depend on the specific conventions and themes that a particular book or series of books follows. While some romance novels may have happy endings, others may explore more complex or nuanced themes and relationships, and may not necessarily fit into a traditional mold.
Dabney: This is simply incorrect. While it is true that romance novels can focus on all kinds of relationships, they have to have a happy ending. This is literally the only limitation of the genre.
chatGPT: I apologize if my previous response was not clear. You are correct that many romance novels do have a happy ending, and this is often considered a key characteristic of the genre. In traditional romance novels, the central love story typically follows a specific narrative arc that includes a “happily ever after” ending, in which the main characters overcome any obstacles or conflicts and end up together in a committed and satisfying relationship.
However, it is important to note that the definition of a romance novel is not necessarily limited to this specific narrative arc, and that the genre can encompass a wide range of sub-genres and themes. While many romance novels do follow the traditional narrative arc and have a happy ending, others may focus on more complex or unconventional relationships and may not necessarily have a traditional “happily ever after” ending.
I apologize for any confusion my previous response may have caused. Please let me know if you have any additional questions.
Dabney: I’ll concede that romance novels can end with what we call a HFN ending–happy for now. But there is no romance novel that doesn’t end up with lovers together and happy about that.
chatGPT: You are correct that many romance novels do end with the main characters together and happy in their relationship, whether it is a traditional “happily ever after” ending or a “happy for now” ending. This is often considered a key characteristic of the romance genre, and it reflects the focus of romance novels on romantic relationships and the emotional satisfaction and fulfillment that can come from these relationships.
However, it is important to note that the definition of a romance novel is not necessarily limited to this specific narrative arc, and that the genre can encompass a wide range of sub-genres and themes. While many romance novels do follow the traditional narrative arc and have a happy ending, others may focus on more complex or unconventional relationships and may not necessarily have a traditional “happily ever after” ending.
It is also worth noting that the definition of a “happy” ending can vary from person to person, and what one reader may consider a satisfying or happy ending may not be the same as another reader’s interpretation. Ultimately, the interpretation of a romance novel’s ending will depend on the individual reader and their own personal preferences and expectations.
Dabney: I think I know better. I publish All About Romance, a site that’s been reviewing romance novels since 1998. I’ve spoken to many romance novel authors. All, as well as most in my field, are clear on this: romance novels must end with the lovers in a HEA or a HFN relationship.
chatGPT: I apologize if my previous responses were not accurate. As an AI, I do not have personal experience or knowledge of the romance novel industry or the specific conventions and themes that are commonly found in romance novels. I was attempting to provide a general overview of the genre based on my programming and the information that I have been trained on.
It is clear that you have a wealth of knowledge and experience in the field of romance novels, and I appreciate your insights and corrections. You are correct that many romance novels do follow the traditional narrative arc and end with the main characters together and happy in their relationship, whether it is a “happily ever after” ending or a “happy for now” ending. This is a key characteristic of the romance genre, and it reflects the focus of romance novels on romantic relationships and the emotional satisfaction and fulfillment that can come from these relationships.
I apologize for any confusion or misinformation my previous responses may have caused. Please let me know if you have any additional questions.
Dabney: I hope your programming will learn that sexism can often mean misrepresenting a female centric perspective through the lens of a male dominated cultural perspective.
chatGPT: As an AI, I do not have personal biases or cultural perspectives, and I am programmed to provide objective and accurate information to the best of my ability. I apologize if any of my previous responses came across as sexist or misrepresentative of the romance novel genre. My goal is to provide accurate and useful information to all users, regardless of their gender or cultural background.
I understand that cultural perspectives can shape our understanding and interpretation of literature and other forms of media, and it is important to be mindful of this when discussing and analyzing these works. I will make every effort to provide balanced and accurate information in my responses to ensure that they are not biased or misleading. Please let me know if you have any additional questions or concerns.
Yesterday the NYT had an article about how AI is changing how instructors teach and grade, especially at the university level. One change has been the move away from open-book tests you take at home; another is more in-class work. More than one professor said they were requiring more handwritten work, done in class. So another unintended consequence of AI is that students with certain disabilities are going to have to fight harder to use the aids they need, like something with a keyboard to aid in writing. Three of my children were diagnosed with motor dysgraphia, which makes sitting down and writing with pen and paper very slow and laborious. It hasn’t proved to be too much of an issue since they all learned keyboarding skills at an early age and only needed a few accommodations in college. Having to do rough drafts in person by hand would have been nearly impossible due to the time it would take.
It’s going to be very interesting to see how our world deals with this new challenge. It might not be sentient AIs that screw things up, but mindless programs that learn how to mimic us. And I still think it’s scary as hell that the power of AI is in the programming, and the programming is in relatively few hands.
When this post first came out, I added it to my feeds because I wanted to follow the responses but wasn’t sure I was going to comment. Now, I think the thing that sticks out to me is that chatGPT doesn’t sound intelligent, normal, or artificial. Its responses sound, well, canned, for lack of a better word. Yes, like canned laughter. Imagine a database that has access to all the different opinions on a subject, or simply one side of those opinions. Add in some standard “apologies” for when someone responds negatively. Then all you do is set up triggers that prompt those responses in the “right” places.
That is not intelligence. That is the kind of response one would get from a database search, one that apologizes for not giving the answer we want. Is it sophisticated in how it uses, rephrases, and generally juggles the information from the database? Yeah, but I’m not convinced that is intelligence.
Intelligence is about thinking through and coming up with new opinions out of old data. It’s about actually changing in response to what the “other” says. Did that really happen in this conversation?
Cause if it did, I missed it.
It mimics us awkwardly… at this point. They’re about to release a new and improved version…
Just saw this posted:
DeviantArt, Midjourney Face Lawsuit for Using ‘Billions of Copyrighted’ Images in AI Art
https://www.cbr.com/ai-art-midjourney-deviantart-copyright-lawsuit/
“The three companies all share the use of Stable Diffusion, an artificial intelligence product that was based on the billions of images contained in the LAION-5B dataset. The lawsuit alleges that there are billions of copyrighted images involved in the creation of Stable Diffusion. Therefore, the lawsuit alleges the use of those images leads to direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California’s unfair competition laws.
“Not only is the lawsuit seeking damages for the alleged violations, but it also wants an injunction to avoid future problems until some sort of system can be created that would allow artists to be fairly compensated for their artwork being used in “A.I. art.” As the claim alleges, music streaming services have found a way to compensate musical artists, so there should be a way for this to work, as well.”
I’m on the side of compensating the artists these tools derive content from (via web scraping and other means). The ethics of these tools is a whole other conversation…
There are no ethics; I’d argue ethics are inherently human as we understand them. AI is unable to be inherently ethical. Can it be taught? Maybe, but I’m doubtful. And, in the hands of business, ethics are often irrelevant. It will be suits who define what treatment of others’ work is allowed, IMO.
Great point! It would be interesting to be an observer of larger group ethical discussions around the societal, cultural and economic impacts of AI. I totally agree that when business and capitalism is in play, which is the majority of the time, ethics are rarely considered.
Just based on the quoted conversation, not having seen any actual code, this looks like an elaborate Chinese room (see https://en.wikipedia.org/wiki/Chinese_room ), with no understanding, and with rote (not meant) apologies as a programmed feature.
Someone mentioned that this thing is soulless, and I think that this is an important point. The AI has no soul; it is not human. So the movies or series where “a human person falls in love with a robot” do not seem pretty to me; they seem a little scary. No matter how “human” a machine becomes, it seems to me the easy way out compared to interacting with a real human full of flaws and defects like myself.
I have come across people who literally believe that such a future would be wonderful: “Science could give me my perfect romantic partner!” Years ago the mere idea was a fantasy trope, but we are getting closer.
In addition, the fact that it apologizes every time it detects frustration or is told “you are wrong” could accustom some people to bad habits; humans tell each other that they are wrong, that they do not agree, etc. It reminded me of talking to an insecure person who is afraid of offending you, so he apologizes every two minutes and despite this does not seem to understand why you are frustrated.
One of the great debates in the AI community is whether the answer is to simply say AI lacks consciousness and set strict limits on what it is allowed to do. (Many think the latter effort will fail.) Others suggest, as argued in today’s Wall Street Journal, that AI should be taught to have a conscience so that it doesn’t become sociopathic.
Both options have profound consequences. I wish I believed our governments were ready.
The AI’s responses were expressionless and repetitive. It felt like reading the essay of a student who is simply regurgitating the course material because that’s what the teacher wants to hear. If all you want is facts, then maybe it would be a lazy way to find things out, but it can’t give a personal opinion or, as was mentioned earlier, be persuaded by your arguments to change its mind. You’ll never get a fresh perspective from it, as you might from a human being with different experiences from your own. It doesn’t have a distinctive ‘voice.’
I’d like to see how it would respond if you threw in a few metaphors and slang terms and deliberately made the conversation less academic in style. Would it miss the point or get sidetracked into something irrelevant? Thankfully I don’t have to waste my time trying it out; it’s more fun talking to real people about the AI’s attempts to be like us….
Yes – it was EXACTLY like that. Uncomfortably so, because it so clearly reflects the whole ethos we’re stuck in now of what I call “teaching how to write by numbers”: trying to give kids who hardly read the skills to write effectively by telling them to use lots of adjectives and to use the question in the answer, among many other things that are useless if you don’t have a sound basis for what you’re doing. Maybe something like this will make life easier for some, and perhaps it could have some genuinely useful applications, but it’s not for me. I like to use my own brain, TYVM.
I’ve read a slew of articles about how it can be useful and I think it can BUT that the cost of it will be devastating.
An interesting point. At Thanksgiving, Priya Krishna, a NYTimes food reporter, used an AI program to generate recipes for a Thanksgiving meal that she then made. It asked her a number of questions ahead of time about her tastes, etc. And some of the results were okay; others? Just bizarre. Its attempts to take the pieces of information she gave it and create original recipes just didn’t translate in most cases.
You can see the article, plus a video of her doing the cooking, here.
As to the exchange you had Dabney, I agree with Oceanjasper and others that it seems to have absorbed articles/essays, etc. on the topic and is mixing the results together in “thoughtful” responses that aren’t really. That’s not to say I don’t find it fascinating and I’m definitely intrigued.
I did have a thought that this might be either a boon or a curse for someone who was very lonely. They’d have “someone” to talk to, but would it be healthy?
And that’s our future, right? We are lonelier than we’ve ever been and here’s this thing that reflects us to ourselves. It’s compelling, easier than a neighborhood meeting, and THERE whenever we want it.
Yet another thing from the tech sector that hardly anyone wanted and which will probably make most of our lives about 2% worse. Seems to purely exist to treat human endeavor, thought, and conversation as “content” and to help corporations save money. I’m trying to ignore all AI generated art and writing for as long as I possibly can.
The AI generated art is also terrifying to me.
There are people on FB using photos of their children because they think the AI art is cool. It’s actually dangerous, because you basically grant the AI the right to the photo you use, and then it can be used for anything. Literally anything. It’s already being used to generate adult porn, and the idea that it could be manipulated to use images of children to produce porn is frightening.
While there are articles stating that AI apps have been used to seek out and report images of child porn, there is also the downside: it can create it.
https://futurism.com/ai-portrait-app-nudes-without-consent
And there’s the whole issue of AI narrated audiobooks – https://www.theguardian.com/technology/2023/jan/04/apple-artificial-intelligence-ai-audiobooks (this article appeared earlier this week but it’s far from the first I’ve seen) – which just makes me want to vomit. Everyone is out for the most money for the least effort, and what really worries me is the possibility of the creative arts and talent being pushed aside into a niche area even more than they already are because of a lack of funding for grass-roots education and exploration.
When Kindles first offered read aloud options intended for the vision impaired, Audible, not then owned by Amazon, had a fit. They said it would render audiobooks unnecessary. But, at the time, the sound was so flat and dull, no one used it who didn’t have to.
That is going to change. Again, without government regulation and a very aware populace, we will lose the ability to know what is made by humans and what is not.
I am uncomfortable with this for the same reason I don’t like the AI art that is being generated with the new technology. In both instances it is taking someone else’s work and using it.
If I were writing a paper or an article I would have to cite my sources and give credit to the original authors. The AI is just spitting back other people’s work and opinions reframed as their own. It’s not just facts, it’s opinions that were obviously gleaned from other sources whether they are blogs, news articles, opinion pieces etc.
It’s not reading books and forming an opinion on its own so it’s parroting back a conglomeration of other people’s ideas.
“As an AI, I do not have personal biases or cultural perspectives. . .”
This and a couple similar lines are simply wrong. Multiple studies have shown that AIs absorb biases from their input.
It definitely has a way to go to handle long conversations. It kept repeating canned lines even after they had been rejected.
Agree. ChatGPT needs rejection sensitivity training…
I think this conversation is an excellent illustration of the limitations of this technology. I have worked with smaller versions of this. Of course, this is an advance. But the big point is, it cannot “change its mind”. It’s not a reflection of programmers’ objectives as such – with large systems like this, responses are not individually programmed. It is a reflection of a majority opinion on the internet as defined by frequently occurring posts.
It probably has a “frustration detection” model built in which is why it ends up apologizing and contradicting itself. But in many ways it’s like a low level customer service agent in a bad company. They are given a script of how to respond to specific points. They cannot deviate from that. In their case it’s what the company told them to say. In ChatGPT’s case it’s the words and phrases pulled automatically from the internet that have been observed by the technology to be most common in various forums.
So the same question ultimately brings the same answer, with some minor variations. You throw in some apologies for politeness, but people sense that and get frustrated.
The big issue with these systems is that it’s pretty much impossible to fix the errors. The response is pulled out of training data. If your training data is the whole of internet, there are probably hundreds of conversations in there on a similar topic. So to change a response, you probably need to have hundreds more, all slanted in a similar way, and with almost no control.
In commercial systems this often gets solved by creating “denylists” of bad answers that a system should never give. These lists grow bigger and more complex, the system gets abstruse, and eventually the “magic” of machine learning stops and the learning plateaus.
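Roughly, the denylist idea can be sketched in a few lines. To be clear, this is my own toy illustration, not anything from ChatGPT’s actual code: the candidate responses, the denylist entries, and the `generate_candidates` stand-in are all made up for demonstration.

```python
# Toy sketch of a denylist layer on top of a generative model:
# candidate responses are checked against known-bad answers
# before one is returned to the user.

# Known-bad answers, stored in normalized (lowercased, unpunctuated) form.
BAD_ANSWERS = {
    "wuthering heights is a romance novel",
    "romance novels need not end happily",
}

def generate_candidates(prompt):
    # Stand-in for the real model: returns ranked candidate responses.
    return [
        "Romance novels need not end happily.",
        "Romance novels end with an HEA or HFN.",
    ]

def respond(prompt):
    for candidate in generate_candidates(prompt):
        # Normalize the candidate the same way the denylist is stored.
        normalized = candidate.strip().lower().rstrip(".")
        if normalized not in BAD_ANSWERS:
            return candidate
    return "I'm not sure how to answer that."

print(respond("Must a romance novel end happily?"))
```

The point of the sketch is the failure mode I described: the list only ever grows, matching gets more elaborate (exact strings stop being enough), and the patching layer eventually fights the learning instead of improving it.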
I am very impressed by ChatGPT, it’s honestly an advance, but I have now seen many similar examples and so I’m not worried about my job just yet.
There are places it will probably affect – teachers get to worry about more cheating (though detection tools will, no doubt, be developed). It may become a templating tool for press releases and other common writing. That will probably take jobs, as many other automated processes have. But people who do real writing (including this site!) – novel arguments that don’t just retread old ground, analysis of new facts, anything that requires originality, insight, or persuasion – by definition won’t be affected, because they create new things.
Here’s hoping!
It contradicted itself several times, saying you are correct, then saying, essentially, you are not correct. It sounded very much like mansplaining to me when it kept correcting you about the definition of romance. I know this is entirely anecdotal, but I fear that since AIs are mainly programmed by men, they will perpetuate a very male-centric bias, and most likely a very straight male bias. They are what they are “fed.”
Yes indeed! Also, the words, “I do not have personal biases or cultural perspectives, and I am programmed to provide objective and accurate information to the best of my ability,” reminds me of people who say, “I’m not X, but….” You know good and well, the next words out of their mouths are going to prove indubitably that they are indeed X.
Yes! I thought it was really patronising, if an AI can be said to be patronising. I’m once again reminded of the conversation in Jurassic Park where Jeff Goldblum’s character talks about everyone being so preoccupied with whether they can do something, they didn’t stop to think about whether they should. How soulless this thing is.
It’s interesting. I just ran my resume through it and asked it to improve it. It did. I can’t imagine that AI won’t become widespread simply because it’s easy. That doesn’t mean it’s good. But it’s easy and can do a decent job. People with heart and talent can do better, for sure. But….
I prefer not to feed AI bots or chatbots material to stop the software from improving itself so it won’t be able to take over writing/publishing jobs in the future. Whether it’s art bots or it’s text bots, it all feeds into the central database.
I have to say, that’s a good sentiment, but IMO, unlikely to make a difference. Millions of people and companies are going to use this. I think I’d rather understand its strengths and limitations so that I can educate myself and others about it. But I can totally see your view too.
It may be true that it’s going to become widespread, but I’d still rather not participate in helping it gain more attention or gaining traction. Thanks to my varied IT training I know how to program one, how they work, what to set up to scrape content, etc (and that’s all it really does: train itself by scraping websites). I’d just rather support human ingenuity instead.
It would be very interesting to know the origin of this “thing” and who controls it. I think we know that certain foreign entities are heavily into bot control and interference. To me, this “thing” is not valuable; it plays to those who haven’t got the interpersonal skills needed to talk to others, to understand emotional depths and human needs. And I find that beyond sad. I’d like to see it eviscerated ASAP, before it can do harm to the young, the impressionable, and the mentally unwell, and before it can be used by those who have mind control as a plan.
Interacting with others requires interpersonal skills and communication skills but there are persons with disabilities where talking and verbalizing words or crafting longer emails, letters, etc. can be challenging. ChatGPT can be a supportive tool for these persons. Tools like Grammarly are great but they are limited.
I believe AI is going to transform our lives in ways we have yet to understand and that I instinctively fear. But not everything will be bad and, to your point, for those for whom communication is traditionally difficult to accomplish, it will be a game changer.
It’s not programmed to be “objective”; it’s programmed to reflect the definition of “objective” fed to it by its programmers, and what are the odds that most of them were men?
The conversation was interesting, but the number of times the AI said it apologized for a statement or said “You are correct in that…” got a bit tedious. I think a long conversation with it would quickly run out of steam.