Queering the smart wife can, in its simplest form, mean giving digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the flirty, submissive persona that many companies have chosen to adopt.
Strengers adds that Q is a fair example of what these devices could look like, “but that cannot be the only solution.” Another option could be to introduce masculinity in different ways. One example is Pepper, a humanoid robot developed by SoftBank Robotics that is often attributed masculine pronouns and is able to recognize basic human faces and emotions. Another is Jibo, a robot introduced in 2017 that also used masculine pronouns and was marketed as a social robot for the home, although it has since been given a second life as a device focused on healthcare and education. Given the “gentle and androgynous” masculinity performed by Pepper and Jibo – the former answers questions politely while often presenting a flirtatious appearance, and the latter often spins awkwardly and approaches users with endearing behavior – Strengers and Kennedy see them as positive steps in the right direction.
Queering digital assistants could also mean creating bot personalities that replace humanlike conceptions of the technology altogether. When asked about its gender, Eno, Capital One’s banking chatbot launched in 2019, replies playfully: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeros. Consider me a robot.”
Likewise, Kai, an online banking chatbot developed by Kasisto – a company that builds artificial-intelligence software for online banking – abandons human characteristics altogether. Jacqueline Feldman, the Massachusetts-based writer and user-experience designer who created Kai, explained that the bot was “designed to be genderless”: not by claiming a non-binary identity, as Q does, but by assuming a robot-specific identity and using ‘it’ pronouns. “From my perspective as a designer, a bot can be beautifully and charmingly designed in new ways of its own, without pretending to be human,” she says.
When asked whether it is a real person, Kai replies, “A bot is a bot is a bot. Next question, please” – a clear signal to users that it is not human and does not pretend to be. And if asked about its gender, it answers, “As a bot, I’m not a human. But I learn. That’s machine learning.”
The bot identity does not mean that Kai puts up with abuse. A few years ago, Feldman talked about intentionally designing Kai with the ability to deflect and shut down harassment. For example, if a user repeatedly harasses the bot, Kai responds with something like “I’m picturing white sand and a hammock, please try me later!” “I really did my best to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.
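The deflect-and-shut-down behavior described here can be sketched in a few lines. This is a minimal illustrative sketch only – the class name, keyword list, strike threshold, and replies (apart from the two quoted in the article) are assumptions, not Kasisto’s actual implementation:

```python
# Hypothetical sketch of a bot that deflects one-off harassment and
# disengages after repeated harassment, in the spirit of Kai's design.
# All names and thresholds below are illustrative assumptions.

HARASSMENT_TERMS = {"stupid", "idiot", "shut up"}  # assumed keyword list

DEFLECT_REPLY = "A bot is a bot is a bot. Next question, please."
DISENGAGE_REPLY = "I'm picturing white sand and a hammock, please try me later!"


class DeflectingBot:
    def __init__(self, limit: int = 2):
        self.strikes = 0   # consecutive harassing messages seen
        self.limit = limit # strikes before the bot disengages

    def respond(self, message: str) -> str:
        if any(term in message.lower() for term in HARASSMENT_TERMS):
            self.strikes += 1
            if self.strikes >= self.limit:
                # Repeated harassment: shut the exchange down.
                return DISENGAGE_REPLY
            # First offense: deflect and move on.
            return DEFLECT_REPLY
        self.strikes = 0  # civil input resets the count
        return "How can I help with your banking today?"
```

The key design choice mirrored from the article is that the bot neither absorbs abuse silently nor escalates: it redirects once, then withdraws.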
However, Feldman believes that bots have a moral duty to identify themselves as bots. “There’s a lack of transparency when companies design [bots] in ways that make it easy for a person interacting with one to forget that it’s a bot,” she says, and gendering bots or giving them a human voice makes that even harder. Since many consumer experiences with chatbots can be frustrating, and many people would rather just talk to a person, Feldman believes that endowing bots with human traits can be a case of “over-designing.”