this post was submitted on 16 May 2024
303 points (100.0% liked)

Privacy Guides


In the digital age, protecting your personal information might seem like an impossible task. We’re here to help.

This is a community for sharing news about privacy, posting information about cool privacy tools and services, and getting advice about your privacy journey.


You can subscribe to this community from any Kbin or Lemmy instance.



Check out our website at privacyguides.org before asking your questions here. We've tried answering the common questions and recommendations there!

Want to get involved? The website is open-source on GitHub, and your help would be appreciated!


This community is the "official" Privacy Guides community on Lemmy, which can be verified here. Other "Privacy Guides" communities on other Lemmy servers are not moderated by this team or associated with the website.


Moderation Rules:

  1. We prefer posting about open-source software whenever possible.
  2. This is not the place for self-promotion if you are not listed on privacyguides.org. If you want to be listed, make a suggestion on our forum first.
  3. No soliciting engagement: Don't ask for upvotes, follows, etc.
  4. Surveys, Fundraising, and Petitions must be pre-approved by the mod team.
  5. Be civil, no violence, hate speech. Assume people here are posting in good faith.
  6. Don't repost topics which have already been covered here.
  7. News posts must be related to privacy and security, and your post title must match the article headline exactly. Do not editorialize titles, you can post your opinions in the post body or a comment.
  8. Memes/images/video posts that could be summarized as text explanations should not be posted. Infographics and conference talks from reputable sources are acceptable.
  9. No help vampires: This is not a tech support subreddit, don't abuse our community's willingness to help. Questions related to privacy, security or privacy/security related software and their configurations are acceptable.
  10. No misinformation: Extraordinary claims must be matched with evidence.
  11. Do not post about VPNs or cryptocurrencies which are not listed on privacyguides.org. See Rule 2 for info on adding new recommendations to the website.
  12. General guides or software lists are not permitted. Original sources and research about specific topics are allowed as long as they are high quality and factual. We are not providing a platform for poorly-vetted, out-of-date or conflicting recommendations.


Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.

To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.

“This is incredibly dangerous,” says Meredith Whittaker, the president of the foundation behind the end-to-end encrypted messaging app Signal.

Whittaker, a former Google employee, argues that the entire premise of the anti-scam call feature poses a potential threat. That’s because Google could potentially program the same technology to scan for other keywords, like asking for access to abortion services.

“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting 'scams' it's a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources' or ‘commonly associated with tech worker whistleblowing.’”

[–] dukethorion@lemmy.world 51 points 1 month ago (5 children)

"...locally on device without connecting to the internet"

How would it then report such behavior to Google, without internet?

If it notifies the end user, what good does that do? My phone is at my ear; I don't stop a conversation when another app sends a notification while I'm on a call.

This will 100% report things in the background to Google.

[–] TheHobbyist@lemmy.zip 22 points 1 month ago (1 children)

You're putting a very large amount of trust in something that may only require the flip of a switch to start sending that information back to Google, on top of all the heavy telemetry already feeding back...

[–] Rai@lemmy.dbzer0.com 7 points 1 month ago

Mega hot take on this site: I have no trust in Google

[–] GenderNeutralBro@lemmy.sdf.org 9 points 1 month ago

There are a few ways this could work, but it hardly seems worth the effort if it's not phoning home.

They could have an on-device database of red flags and use on-device voice recognition against that database. But then what? Pop up a "scam likely" screen while you're already mid-call? Maybe include an option to report scams back to Google with a transcript? I guess that could be useful.

Anything more than that would be a privacy nightmare. I don't want Google's AI deciding which of my conversations are private and which get sent back to Google. Any non-zero false positive rate would simply be unacceptable.

Maybe this is the first look at a new cat-and-mouse game: AI to detect AI-generated voices? AI-generated voice scams are already out there in the wild and will only become more common as time goes on.
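To make the fully on-device design described in the comment above a bit more concrete, here is a minimal sketch. The red-flag phrases, function names, and warning text are all invented for illustration; Google has not published how Gemini Nano's classifier actually works. The only point of the sketch is that such a pipeline can warn the user without sending anything off the device.

```python
# A minimal sketch of an entirely on-device flagging pipeline.
# The phrase list, function names, and warning text are invented for
# illustration; nothing here reflects Google's actual implementation.

RED_FLAGS = [
    "gift card",
    "wire the money right now",
    "verify your social security number",
    "your account has been compromised",
]


def looks_like_scam(transcript: str) -> bool:
    """Match a locally produced transcript against a local phrase list."""
    text = transcript.lower()
    return any(phrase in text for phrase in RED_FLAGS)


def handle_transcript(transcript: str) -> None:
    """Receive text from a hypothetical on-device speech-to-text model.

    The key privacy property: the only side effect is a local warning.
    No transcript, verdict, or metadata leaves the device.
    """
    if looks_like_scam(transcript):
        print("Likely scam: consider hanging up.")  # stand-in for an OS pop-up


if __name__ == "__main__":
    handle_transcript(
        "This is your bank. Your account has been compromised, "
        "please verify your social security number."
    )
```

A real implementation would presumably use a learned classifier rather than keyword matching, but the privacy argument is the same either way: the only output is a local notification.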

[–] smeg@feddit.uk 1 points 1 month ago

I assume it means the "AI" bit is running locally (for cost/efficiency reasons, and so your actual voice isn't uploaded), and the results are then uploaded wherever (which is theoretically better but still hugely open to abuse)

[–] helenslunch@feddit.nl 1 points 1 month ago* (last edited 1 month ago)

How would it then report such behavior to Google, without internet?

It doesn't

In a demo, the tech giant simulated a scam call involving a fraudster impersonating a bank. A pop-up message appeared, encouraging the user to hang up.

If it notifies the end user, what good does that do?

You can't see why it might be helpful for a user to know that they're speaking to a scammer?

[–] fruitycoder@sh.itjust.works 1 points 1 month ago

My bet is it will work like their federated text prediction in Gboard.
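For reference, here is a toy sketch of federated averaging, the general approach behind Gboard's on-device prediction training: each device computes an update on its own private data, and a server only ever sees the averaged model weights. The linear model and synthetic data below are invented for illustration; this is not Google's implementation.

```python
# A toy sketch of federated averaging, the general technique behind
# Gboard's on-device training. The linear model and synthetic data are
# invented for illustration; this is not Google's implementation.

import numpy as np


def local_update(global_weights, features, labels, lr=0.1):
    """One gradient step computed on a device's private data.

    Only the updated weights are returned; the raw data never leaves
    the device.
    """
    preds = features @ global_weights
    grad = features.T @ (preds - labels) / len(labels)
    return global_weights - lr * grad


def federated_round(global_weights, devices):
    """The server averages the weight updates it receives from each device."""
    updates = [local_update(global_weights, x, y) for x, y in devices]
    return np.mean(updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_weights = np.array([2.0, -1.0])

    # Each "device" holds its own private data; the server never sees it.
    devices = []
    for _ in range(5):
        x = rng.normal(size=(20, 2))
        devices.append((x, x @ true_weights))

    weights = np.zeros(2)
    for _ in range(200):
        weights = federated_round(weights, devices)

    print("learned weights:", weights)  # converges toward [2.0, -1.0]
```

Whether that design is reassuring here still depends on what the update pipeline is configured to send back, which is the trust question raised earlier in the thread.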