Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

Earlier this year, Australia's eSafety commissioner took X to court over its refusal to remove videos of a religiously motivated Sydney church stabbing for its global users.

The case was ultimately dropped, but commissioner Julie Inman Grant says she received an "avalanche of online abuse" after Mr Musk called her the "censorship commissar" in a post to his 196 million followers.

[...]

A Columbia University report into technology-facilitated gender-based violence - which used Ms Inman Grant as a case study - found that she had been mentioned in almost 74,000 posts on X ahead of the court proceedings, despite being a relatively unknown figure online beforehand.

According to the analysis, the majority of the messages were either negative, hateful or threatening in some way. Dehumanising slurs and gendered language were also frequently noted, with users calling Ms Inman Grant names such as "left-wing Barbie" or "captain tampon".

[...]

Ms Inman Grant said that Mr Musk's decision to use "disinformation" to suggest that she was "trying to globally censor the internet" had amounted to a "dog whistle from a very powerful tech billionaire who owns his own megaphone".

She said that the torrent of online vitriol which followed had prompted Australian police to warn her against travelling to the US, and that the names of her children and other family members had been released across the internet.

[...]

The case turned into a test of Australia's ability to enforce its online rules against social media giants operating in multiple jurisdictions – one which failed after a Federal Court judge found that banning the posts from appearing on X globally would not be “reasonable” as it would likely be "ignored or disparaged by other countries".

In June, Ms Inman Grant's office said it would not pursue the case further, and that it would focus on other pending litigation against the platform.

X's Global Government Affairs team described the outcome as a win for "freedom of speech".


It could also identify your voice and recognize you and your ad preferences, and those of your passengers.

Why...


Archived link

TIDRONE, a threat actor linked to Chinese-speaking groups, targets military-related industry chains in Taiwan

  • TIDRONE, an unidentified threat actor linked to Chinese-speaking groups, has demonstrated significant interest in military-related industry chains, especially drone manufacturers in Taiwan.

  • The threat cluster uses enterprise resource planning (ERP) software or remote desktops to deploy advanced malware toolsets such as CXCLNT and CLNTEND.

  • CXCLNT has basic file upload and download capabilities, along with features for clearing traces, collecting victim information such as file listings and computer names, and downloading additional portable executable (PE) files for execution.

  • CLNTEND is a newly discovered remote access tool (RAT), in use since this April, that supports a wider range of network protocols for communication.

  • During the post-exploitation phase, telemetry logs revealed user account control (UAC) bypass techniques, credential dumping, and hacktool usage to disable antivirus products.


Archived version

Two days after U.S. authorities accused two employees of Russian state media network RT of coordinating an online network aimed at influencing the 2024 presidential election, more than 400 posts by Tenet Media, the online content company at the heart of the case, were still accessible on TikTok, unlabeled and untouched.

So too were Tenet Media's nearly 2,500 Instagram videos and more than 4,000 posts on social network X, along with its posts on Facebook and video platform Rumble.

Of all the major platforms where Tenet distributed its videos, so far only Alphabet's YouTube has taken action penalizing the company, pulling down the main Tenet Media channel along with four others operated by owner Lauren Chen on Thursday.

[...]

The platforms' apparent inaction on the campaign is a striking departure from the aggressive efforts they have touted in recent years to expose secretive foreign propaganda campaigns, reflecting both the novelty of the tactics allegedly used and the fraught politics of policing content posted by real people inside the United States.

It also exposes a fresh challenge faced by the platforms as Russia increasingly turns to unwitting American social media stars to covertly influence voters ahead of U.S. elections this year, a sort of digital update to Cold War-era practices of laundering messages through journalists or front media outlets, according to disinformation researchers.

"What we're ultimately grappling with is a problem that exists in the real world. It's manifesting on social media in the sense that the entity has a presence there, but it isn't a social media problem per se," said Olga Belogolova, a disinformation professor at Johns Hopkins School of Advanced International Studies and former head of influence operations policy at Meta.

[...]


Tropic Trooper (also known as KeyBoy and Pirate Panda) is an APT group active since 2011. This group has traditionally targeted sectors such as government, healthcare, transportation and high-tech industries in Taiwan, the Philippines and Hong Kong. Our recent investigation has revealed that in 2024 they conducted persistent campaigns targeting a government entity in the Middle East, starting in June 2023.

Sighting this group’s TTPs in critical governmental entities in the Middle East, particularly those related to human rights studies, marks a new strategic move for them. This can help the threat intelligence community better understand the motives of this threat actor.

The infection came to our attention in June 2024, when our telemetry gave recurring alerts for a new China Chopper web shell variant (used by many Chinese-speaking actors), which was found on a public web server. The server was hosting an open-source content management system (CMS) called Umbraco, written in C#. The observed web shell component was compiled as a .NET module of Umbraco CMS.

In our subsequent investigation, we looked for more suspicious detections on this public server and identified multiple malware sets. These include post-exploitation tools, which, we assess with medium confidence, are related to and leveraged in this intrusion.

Furthermore, we identified new DLL search-order hijacking implants that are loaded by a legitimate but vulnerable executable because it does not specify the full path to the DLL it needs. This attack chain was attempting to load the Crowdoor loader, whose name echoes the SparrowDoor backdoor detailed by ESET. During the attack, the security agent blocked the first Crowdoor loader, prompting the attackers to switch to a new, previously unreported variant with almost the same impact.
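
For readers unfamiliar with the technique, below is a minimal sketch of why an unqualified DLL name is hijackable. It is purely illustrative: the Python/ctypes setup, the "version.dll" name, and the path handling are assumptions for the example, not anything taken from this campaign's tooling, and it only runs on Windows.

```python
# Minimal sketch (illustrative only) of the loading ambiguity behind
# DLL search-order hijacking. Assumes a Windows host; "version.dll" is
# just a commonly abused example name, not from this campaign.
import ctypes
import os

# Loading by bare name makes Windows walk its DLL search order, which starts
# with the directory of the running executable - exactly where an attacker
# can plant a look-alike DLL next to a legitimate, signed binary.
hijackable = ctypes.WinDLL("version.dll")

# Loading by absolute path pins the library to the trusted system directory
# and removes the ambiguity that search-order hijacking relies on.
pinned = ctypes.WinDLL(
    os.path.join(os.environ["SystemRoot"], "System32", "version.dll")
)
```

The implants described above abuse the first pattern: a legitimate executable requests a DLL by name alone, and the loader resolves it to the attacker's copy sitting beside it.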


Here is the indictment and press release by the U.S. Department of Justice.

The indictment of two employees of RT - formerly 'Russia Today', a Kremlin-controlled propaganda outlet based in Moscow - includes allegations that they implemented a nearly $10 million plan to fund a U.S.-based company as one of their “covert projects.”

Employees of the Russia-backed media network RT funded and directed a scheme that sent millions of dollars to prominent right-wing commentators through a media company that appears to match the description of Tenet Media, a leading platform for pro-Trump voices [...]

The indictment on Wednesday of two RT employees, Konstantin Kalashnikov and Elena Afanasyeva, includes allegations that the duo implemented a nearly $10 million plan to fund an unnamed Tennessee-based company as one of their “covert projects” to influence American politics by posting videos to TikTok, Instagram, X and YouTube.

[...]

[Involved appear to be] six commentators: Lauren Southern, Tim Pool, Tayler Hansen, Matt Christiansen, Dave Rubin and Benny Johnson. The indictment refers to six commentators, who are not named.

[...]

Details included in the indictment match those of two of Tenet’s personalities: Rubin and Pool. As of Wednesday, Rubin’s “The Rubin Report” YouTube channel had 2.44 million subscribers. The indictment refers to “Commentator-1” as having over 2.4 million YouTube subscribers. A person with over 1.3 million YouTube subscribers is referred to as “Commentator-2.” Pool now has 1.37 million subscribers. The indictment also refers to three other commentators, including one with female pronouns, but lacked any information that could directly identify their channels.

[...]


A story posted on a mysterious website has been widely circulated on social media after it made a baseless claim that Kamala Harris - the Democratic presidential nominee - was involved in an alleged hit-and-run incident.

It claims, without providing evidence, that a 13-year-old girl was left paralysed by the crash, which it says took place in San Francisco in 2011.

The story, which was published on 2 September by a website purporting to be a media organisation called KBSF-San Francisco News, has been widely shared online. Some online posts by right-leaning users citing the story have been viewed millions of times.

BBC Verify has found numerous false details indicating that the story is fake, and the website has now been taken down.

[...]

Fake news stories targeting the US

The story and the website it originally appeared on share striking similarities with a network of fake news websites that masquerade as US local news outlets, which BBC Verify has previously extensively reported on.

John Mark Dougan, a former Florida police officer who relocated to Moscow, is one of the key figures behind the network.

Approached by BBC Verify to comment on the hit-and-run story, Mr Dougan denied any involvement, saying: “Do I ever admit to anything? Of course it’s not one of mine.”

The websites mix dozens of genuine news stories taken from real news outlets with what is essentially the real meat of the operation - totally fabricated stories that often include misinformation about Ukraine or target US audiences.

The websites are often set up shortly before the fake stories appear on them, and then go offline after they serve their purpose.


The head of US Space Command said Wednesday he would like to see more transparency from the Chinese government on space debris, especially as one of China's newer rockets has shown a propensity for breaking apart and littering low-Earth orbit with hundreds of pieces of space junk.

Gen. Stephen Whiting, commander of US Space Command, said he has observed some improvement in the dialogue between US and Chinese military officials this year. But the disintegration of the upper stage from a Long March 6A rocket earlier this month showed China could do more to prevent the creation of space debris and communicate openly about it when it happens.

The Chinese government acknowledged the breakup of the Long March 6A rocket's upper stage in a statement by its Ministry of Foreign Affairs on August 14, more than a week after the rocket's launch August 6 with the first batch of 18 Internet satellites for a megaconstellation of thousands of spacecraft analogous to SpaceX's Starlink network.

Space Command reported it detected more than 300 objects associated with the breakup of the upper stage in orbit, and LeoLabs, a commercial space situational awareness company, said its radars detected at least 700 objects attributed to the Chinese rocket.

"I hope the next time there's a rocket like that, that leaves a lot of debris, that it's not our sensors that are the first to detect that, but we're getting communications to help us understand that, just like we communicate with others," Whiting said at an event hosted by the Mitchell Institute marking the fifth anniversary of the reestablishment of Space Command.

[...]

Last November, [U.S.] President Joe Biden and Chinese President Xi Jinping agreed to resume military-to-military communications between each nation's armed forces, which were suspended in 2022. US and Chinese military leaders have met face to face several times this year, and Jake Sullivan, Biden's national security adviser, met with Xi and Chinese military leaders this week in Beijing. The meetings have focused on terrestrial concerns and operational matters, such as reducing the risk of miscalculations, or an accidental escalation or conflict between Chinese airplanes and ships and those from the United States and its allies.

[...]

China has a track record of leaving behind a lot of space junk. LeoLabs says there are nearly 1,000 abandoned rocket bodies in low-Earth orbit, with an average mass of 1.5 metric tons.

"That number continues to grow, posing a significant risk to the space environment," LeoLabs said in a statement. "While Russia and the US have improved their 'rocket body abandonment behavior' over the last 20 years, the relative contribution by other countries has grown by a factor of five and China by 50x.

"The rate that China is leaving abandoned rocket bodies in orbit reverses the improved behavior of US and Russia and results in a continual accumulation of objects that will be especially prolific in creating fragments if involved in a collision," LeoLabs engineers wrote in a paper last year.

LeoLabs researchers found the total mass of all rocket hardware in low-Earth orbit (LEO) is currently nearly 1,500 metric tons. "Sadly, the rate of rocket body mass abandonment in LEO has actually increased in the last 20 years relative to the first (approximately) 45 years of the space age."
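
As a quick back-of-the-envelope check (an editorial illustration, not part of the LeoLabs analysis), the two figures quoted above are of the same order: roughly a thousand abandoned rocket bodies at about 1.5 metric tons apiece accounts for most of the ~1,500 metric tons of rocket hardware mass cited.

```python
# Rough consistency check of the LeoLabs figures quoted above.
rocket_bodies = 1_000      # "nearly 1,000 abandoned rocket bodies" in LEO
avg_mass_tonnes = 1.5      # average mass per rocket body, metric tons

estimated_total = rocket_bodies * avg_mass_tonnes
print(f"~{estimated_total:.0f} metric tons")  # ~1500, close to the quoted total
```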


Archived version

When he first emerged on social media, the user known as Harlan claimed to be a New Yorker and an Army veteran who supported Donald Trump for president. Harlan said he was 29, and his profile picture showed a smiling, handsome young man.

A few months later, Harlan underwent a transformation. Now, he claimed to be 31 and from Florida.

New research into Chinese disinformation networks targeting American voters shows Harlan’s claims were as fictitious as his profile picture, which analysts think was created using artificial intelligence.

As voters prepare to cast their ballots this fall, China has been making its own plans, cultivating networks of fake social media users designed to mimic Americans. Whoever or wherever he really is, Harlan is a small part of a larger effort by U.S. adversaries to use social media to influence and upend America’s political debate.

[...]


cross-posted from: https://feddit.org/post/2527714

Archived link

When men go to pee in a public toilet, they spend a minute gazing at the wall in front of them, which many advertisers have seized upon as an opportunity to put up posters of their products above the stinking urinals.

But in terms of framing, you'd better ask yourself: is this really what I want my brand to be associated with?

You might well think twice if you were selling ice cream or toothpaste, so what if your poster was Ursula von der Leyen's face selling EU values?

Because that's the kind of environment in which the European Commission president, other top EU officials, and national EU leaders are posting their images and comments every day when they use X to communicate with press and the EU public.

Even the toilet analogy is too kind.

There was already lots of toxic crap on X before the summer of 2024.

**Racist, antisemitic, and homophobic content had "surged", according to a January study by US universities.**

**X had more Russian propaganda than any other big social media, an EU report warned in 2023.**

**Porn was 13 percent of X in late 2022, according to internal documents seen by Reuters.**

But this summer, with the failed assassination of Donald Trump in the US and the UK race riots, X's CEO Elon Musk turbocharged his platform into an overflowing sewer of bigotry, nihilism, and greed.

As I tried to follow the UK riots from Brussels using X, time and again, I saw von der Leyen's carefully-coiffed Christian Democrat torso issuing some polite EU statement, while sandwiched on my laptop screen between video-clips getting off on anti-migrant violence, pro-Russian bots, and OnlyFans links.

Musk's algorithms pushed pro-riot content so hard down users' throats it prompted a transatlantic UK government rebuke and talk of legal sanctions.

Tommy Robinson, a leading British racist, got over 430 million views for his X posts, for instance.

Andrew Tate, Britain's top misogynist, got 15 million views for one X post inciting rioters.

And the biggest turd in the cesspit - Musk's own avatar - also kept appearing next to von der Leyen and other EU leaders on my screen, as the US tech baron ranted about "civil war" in the UK, pushed pro-Trump conspiracy theories, or told EU commissioner Thierry Breton to "literally fuck your own face".

Musk's summer coincided with France's arrest of a Russian tech CEO, Pavel Durov, in August on suspicion he condoned the sale of child pornography and drugs on his Telegram platform.

The European Commission also started legal proceedings against X in July over misleading and illegal content, in a process that could see Musk fined hundreds of millions of euros.

But aside from the grand issues of how to regulate social media without stymying free speech or privacy, EU leaders could do something a lot simpler and closer to home for the sake of public mental health - just switch to any other less sleazy platform instead.

You could do it tomorrow with one email to your tech staff. And for all the stupid content on Instagram, for example, at least your face won't keep flashing up next to racist glee and naked tits on your constituents' screens.

Von der Leyen has 1.5 million X followers, French president Emmanuel Macron has 9.8 million, while Spanish prime minister Pedro Sánchez and Polish prime minister Donald Tusk have 1.9 million each.


But please don't worry: not all journalists or the general public are that dumb yet, and most of us will find you and follow you, because politics is genuinely important.

And we will thank you for giving us one more reason to get off X ourselves, because so long as you use it as your main outlet for news updates you are dragging us along with you.

My initial analogy of advertising in a public toilet was designed to show the importance of semiotics in political PR - it matters where you speak, not just what you say.

The analogy also holds good for those who worry that if normal leaders and media abandoned toxic platforms, then extremism would grow in its own exclusive online world.

It's just good public hygiene to bury our sewage pipes, instead of letting people empty their buckets out of the window onto our heads.

But if you prefer to hold your nose and stay on X, consider also that you are damaging not just your own brand but also causing financial and political harm in real life.

Financial hurt, because if you help make people reliant on X for news, then greater use of Musk's platform makes people like him, Robinson, and Tate ever richer via X's monetisation schemes for viral content.

Political injury, because to the extent that von der Leyen, Macron, or Sánchez possess real importance, they help to aggrandise Musk, Tate, and Robinson by continuously appearing alongside them in X's hyper-curated online space.

And so if you should worry that urinals below your face might put people off, then the situation is actually worse than that.

Your presence on X is also helping to pay for the muck to flow and the toilet owner is using you to sell it to the world.


Archived link

The original article is behind a paywall at 404 Media.

In a pitch deck to prospective customers, one of Facebook's alleged marketing partners explained how it listens to users' smartphone microphones and advertises to them accordingly.

As 404 Media reports based on documents leaked to its reporters, the TV and radio news giant Cox Media Group (CMG) claims that its so-called "Active Listening" software uses artificial intelligence (AI) to "capture real-time intent data by listening to our conversations."

"Advertisers can pair this voice-data with behavioral data to target in-market consumers," the deck continues.

In the same slideshow, CMG counted Facebook, Google, and Amazon as clients of its "Active Listening" service. After 404 reached out to Google about its partnership, the tech giant removed the media group from the site for its "Partners Program," which prompted Meta, the owner of Facebook, to admit that it is reviewing CMG to see if it violates any of its terms of service.

An Amazon spokesperson, meanwhile, told 404 that its Ads arm "has never worked with CMG on this program and has no plans to do so." The spox added, confusingly, that if one of its marketing partners violates its rules, the company will take action.


TikTok and other social media companies use AI tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of the number of views they have had. But the AI tools cannot identify everything.

Andrew Kaung says that during the time he worked at TikTok, all videos that were not removed or flagged to human moderators by AI - or reported by other users to moderators - would only then be reviewed again manually if they reached a certain threshold.

He says at one point this was set to 10,000 views or more. He feared this meant some younger users were being exposed to harmful videos. Most major social media companies allow people aged 13 or above to sign up.

TikTok says 99% of content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views. It also says it undertakes proactive investigations on videos with fewer than this number of views.
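
To picture the workflow Kaung describes, here is a hypothetical simplification in Python. The Video class, its fields, and the hard-coded 10,000-view cutoff are assumptions for illustration only, not TikTok's actual systems or API.

```python
# Hypothetical sketch of the threshold-based triage described above;
# it does not reflect TikTok's real code or data model.
from dataclasses import dataclass

MANUAL_REVIEW_THRESHOLD = 10_000  # views before a second, manual pass

@dataclass
class Video:
    views: int
    removed_by_ai: bool = False
    flagged_by_ai: bool = False
    reported_by_users: bool = False

def needs_manual_review(video: Video) -> bool:
    """A video that slipped past AI moderation and user reports only gets
    another human look once it crosses the view threshold."""
    if video.removed_by_ai or video.flagged_by_ai or video.reported_by_users:
        return False  # already handled elsewhere in the pipeline
    return video.views >= MANUAL_REVIEW_THRESHOLD

# Example: an unflagged video with 12,000 views would finally be queued.
print(needs_manual_review(Video(views=12_000)))  # True
```

The gap the article points to follows directly: anything below the threshold that the AI misses and nobody reports can keep circulating, including to younger users, without a human ever reviewing it.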

When he worked at Meta between 2019 and December 2020, Andrew Kaung says there was a different problem. [...] While the majority of videos were removed or flagged to moderators by AI tools, the site relied on users to report other videos once they had already seen them.

He says he raised concerns while at both companies, but was met mainly with inaction because, he says, of fears about the amount of work involved or the cost. He says subsequently some improvements were made at TikTok and Meta, but he says younger users, such as Cai, were left at risk in the meantime.
