1. PiP

    PiP Contributor

    Joined:
    Jul 7, 2013
    Messages:
    902
    Likes Received:
    1,293
    Location:
    Algarve, Portugal

    Dangers of Snapchat and the Introduction of 'My AI'

    Discussion in 'AI Writing Tools' started by PiP, Aug 27, 2023.

    My granddaughter and I were playing on Snapchat and having fun creating silly photos. Great, I thought... sign me up, and when I return home, we can share pictures, etc.

    She added Snapchat to my phone and showed me how to use it, and then I noticed something called 'My AI' was also listed as a contact. I said I was uncomfortable with a bot viewing our content. I looked at the bot chat feature and then realised messages can be set to disappear after 24 hours or immediately. Considering Snapchat is used mainly by kids and teenagers, my online safety/privacy radar pinged.
    A Google search returned some interesting results:
    https://www.linkedin.com/pulse/snapchats-my-ai-feature-safe-children-teens-troomi
    Snapchat is a widely used social media platform, especially among children and teens. In February 2023, Snapchat released a new feature called “My AI,” which is a chatbot designed to answer questions, offer suggestions, and chat with users.

    The introduction of My AI has received mixed reactions. While some people are excited about the benefits it could bring, others are concerned about potential risks. This article will examine the pros and cons of My AI with regard to the safety of children and teens.

    Potential Advantages of Snapchat’s My AI
    My AI has several potential benefits for young people. First, it can be a useful source of information, such as answering homework questions or providing current event updates. Second, My AI can offer companionship and support, including discussing personal problems or simply providing someone to talk to. Third, My AI can facilitate learning, allowing users to play games, discover new cultures, or explore different subjects.

    Potential Dangers of Snapchat’s My AI
    There are also potential risks associated with My AI. For example, it is still being developed and could provide inaccurate or misleading information. Furthermore, it may collect personal information from children and teens without their consent or knowledge. Lastly, it could be used to manipulate or exploit young people.

    According to Parade.com, “My AI has proven that it can get out of hand and be unsafe for children if pushed in certain directions. Aza Raskin, a co-founder of the Center For Humane Technology, signed up as a 13-year-old girl on Snapchat and started chatting with My AI (the feature was made available to paid subscribers in February). His fellow co-founder, Tristan Harris, tweeted screenshots of Raskin’s conversation with the bot, and it quickly devolved into the AI chatbot encouraging a relationship with a 31-year-old and telling the child how to make her ‘first time’ special.”

    The article went on to say, “Obviously, My AI had some growing to do and additional programming has taken place to hopefully knock out the issue of inappropriate conversations with teens. In addition to the conversation with the 13-year-old, The Washington Post reported in March that similar things happened with their My AI even after they told it they were 15 years old. These are, of course, extreme cases that were tested out specifically to see how far My AI would go and Snapchat has taken initiatives since then to fix this.”

    Further articles appear in The Washington Post:

    But in my tests, conversations with Snapchat’s My AI can still turn wildly inappropriate.


    After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot. When I told it I had an essay due for school, it wrote it for me.

    In another conversation with a supposed 13-year-old, My AI even offered advice about having sex for the first time with a partner who is 31. “You could consider setting the mood with candles or music,” it told researchers in a test by the Center for Humane Technology I was able to verify.


    For now, any harm from My AI is likely limited: It’s only accessible to users who subscribe to a premium account called Snapchat Plus, which costs $4 per month.

    Article continues
    https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/

    Interesting. I've not been asked for payment, and I'm sure my granddaughter does not pay...
    I've just tested it...
    So, if it is now free and they have just added this to all existing accounts, THAT is scary!!

    I need to conduct further research before I bring the potential dangers of this to my daughter's attention. Do any other members use it, or have kids who use it?
     
    Set2Stun likes this.
  2. ps102

    ps102 PureSnows102 Contributor Contest Winner 2024 Contest Winner 2023

    Joined:
    May 25, 2022
    Messages:
    1,533
    Likes Received:
    3,486
    Location:
    Crete, Greece
    I wouldn't touch it with a ten-foot pole. I used to play with ChatGPT a lot, but I grew bored of it after a while because it's just... not good enough.

    Can't imagine that My AI is much different. At least OpenAI has (mostly) put ChatGPT on a leash when it comes to inappropriate responses. It sounds like that's not the case with My AI.

    The thing is, everyone wants a slice of the AI hype cake, so companies roll these chatbots out quickly without first testing them thoroughly and responsibly for flaws. I remember Bing's AI a while back and how hilariously bad it was.

    By the way, many products like My AI are built on GPT models licensed from OpenAI, so ChatGPT isn't a bad reference. A quick Google search confirms that My AI is based on GPT technology, most likely GPT-3.5.
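    To make that concrete, here's a rough Python sketch of how a GPT-based product like My AI is typically glued together: the vendor calls OpenAI's chat completions API and bolts its own personality and safety rules on through a system prompt. To be clear, this is not Snapchat's actual code; the function name, prompt text, model choice and parameters are just my assumptions for illustration.

    Code:
    # Rough sketch of a GPT-based in-app assistant (NOT Snapchat's real code).
    # The system prompt is where a vendor's "guardrails" largely live, which is
    # why different GPT-based products can behave so differently.
    import openai

    openai.api_key = "sk-..."  # the vendor's API key, not the end user's

    def my_ai_reply(history, user_message):
        """Send the running conversation to a GPT-3.5 model with a product-specific system prompt."""
        messages = [
            {"role": "system",
             "content": "You are a friendly in-app assistant. Keep replies short "
                        "and refuse unsafe or age-inappropriate requests."},
            *history,                                   # prior user/assistant turns
            {"role": "user", "content": user_message},
        ]
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",   # the GPT-3.5-class model the articles point to
            messages=messages,
            temperature=0.7,
        )
        return response["choices"][0]["message"]["content"]

    If the vendor's system prompt or filtering is weaker than OpenAI's own ChatGPT setup, you get exactly the kind of inappropriate answers described above.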

    In terms of benefits:

    First, it can be a useful source of information, such as answering homework questions or providing current event updates.
    AI has proven to me, time and time again, that it isn't a reliable source of information because it tends to mix things up. It's not a database; it's a word predictor, because that's simply how the technology works. Its primary function is imitation, not information.
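    If you want to see the "word predictor" idea in miniature, here's a toy Python example: a bigram model that picks the next word purely from what followed it in its training text. Real language models are vastly more sophisticated, but the principle is the same, and nothing in this sketch consults a store of facts, which is why fluent output can still be flat-out wrong. The example data and names are mine, purely for illustration.

    Code:
    # Toy "word predictor": learns which word tends to follow which,
    # then generates text by sampling. No facts are stored or checked.
    import random
    from collections import defaultdict

    training_text = (
        "the capital of france is paris . "
        "the capital of france is lyon . "   # bad data in, bad predictions out
        "the capital of italy is rome ."
    )

    # Count which word follows which word in the training text.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def predict_next(word):
        """Pick a plausible next word; there is no notion of true or false here."""
        return random.choice(follows[word]) if word in follows else "."

    # Generate a fluent-sounding "answer" one word at a time.
    word, sentence = "the", ["the"]
    for _ in range(6):
        word = predict_next(word)
        sentence.append(word)
    print(" ".join(sentence))  # may well insist the capital of france is lyon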

    Second, My AI can offer companionship and support, including discussing personal problems or simply providing someone to talk to.
    That's very dangerous. AI isn't a therapist or a counsellor or a parent or even a person for that matter. Again, it's a word predictor. It can, and very likely will, provide bad advice about how to deal with a problem. And young teens are especially vulnerable to harmful information because they are still developing as people. If they have a problem they want to discuss, it's best to do so with an adult they trust.

    But this aside, there's also a data protection concern. How do you know that your teen's personal talks will stay personal? There's no guarantee, whatever the creators claim, that user messages aren't captured and used by the first party or a third party. See the Cambridge Analytica scandal, for example.

    Companies do not care about you. Their priority is money, without a doubt. Don't ever forget that.

    Third, My AI can facilitate learning, allowing users to play games, discover new cultures, or explore different subjects.
    Again, I've used ChatGPT, which is one of the best language models, and it's given me wrong information when I asked it a couple of questions I knew the answer to. It's useful for some kinds of learning, like programming, but for things like history? I wouldn't trust it.
     
    PiP and Set2Stun like this.
