In spring 2018, Mark Zuckerberg invited more than a dozen professors and academics to a series of dinners at his home to discuss how Facebook could better keep its platforms safe from election disinformation, violent content, child sexual abuse material, and hate speech. Alongside these secret meetings, Facebook was regularly making pronouncements that it was spending hundreds of millions of dollars and hiring thousands of human content moderators to make its platforms safer. After Facebook was widely blamed for the rise of “fake news” that supposedly helped Trump win the 2016 election, Facebook repeatedly brought in reporters to examine its election “war room” and explained what it was doing to police its platform, which famously included a new “Oversight Board,” a sort of Supreme Court for hard Facebook decisions.

At the time, Joseph and I published a deep dive into how Facebook does content moderation, an astoundingly difficult task considering the scale of Facebook’s userbase, the differing countries and legal regimes it operates under, and the dizzying array of borderline cases it would need to make policies for and litigate against. As part of that article, I went to Facebook’s Menlo Park headquarters and had a series of on-the-record interviews with policymakers and executives about how important content moderation is and how seriously the company takes it. In 2018, Zuckerberg published a manifesto stating that “the most important thing we at Facebook can do is develop the social infrastructure to build a global community,” and that one of the most important aspects of this would be to “build a safe community that prevents harm [and] helps during crisis” and to build an “informed community” and an “inclusive community.”

Several years later, Facebook has been overrun by AI-generated spam and outright scams. Many of the “people” engaging with this content are bots who themselves spam the platform. Porn and nonconsensual imagery is easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by “AI influencers” in the service of promoting OnlyFans pages also full of stolen content.

Meta still regularly publishes updates that explain what it is doing to keep its platforms safe. In April, it launched “new tools to help protect against extortion and intimate image abuse” and in February it explained how it was “helping teens avoid sextortion scams” and that it would begin “labeling AI-generated images on Facebook, Instagram, and Threads,” though the overwhelming majority of AI-generated images on the platform are still not labeled. Meta also still publishes a “Community Standards Enforcement Report,” where it explains things like “in August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies.” There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I’ve repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.

Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there. Others have switched their academic focus after years of feeling ignored or harassed by right-wing activists who have accused them of being people who just want to censor the internet.

Meanwhile, several groups that have done very important research on content moderation are falling apart or being actively targeted by critics. Last week, Platformer reported that the Stanford Internet Observatory, which runs the Journal of Online Trust & Safety, is “being dismantled” and that several key researchers, including Renee DiResta, who did critical work on Facebook’s AI spam problem, have left. In a statement, the Stanford Internet Observatory said “Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted.” (Stanford has an endowment of $36 billion.)

Following her departure, DiResta wrote for The Atlantic that conspiracy theorists regularly claim she is a CIA shill and one of the leaders of a “Censorship Industrial Complex.” Media Matters is being sued by Elon Musk for pointing out that ads for major brands were appearing next to antisemitic and pro-Nazi content on Twitter and recently had to do mass layoffs.

“You go from having dinner at Zuckerberg’s house to them being like, yeah, we don’t need you anymore,” Danielle Citron, a professor at the University of Virginia’s School of Law who previously consulted with Facebook on trust and safety issues, told me. “So yeah, it’s disheartening.”

It is not a good time to be in the content moderation industry. Republicans and the right wing of American politics more broadly see this as a deserved reckoning for liberal-leaning, California-based social media companies that have taken away their free speech. Elon Musk bought an entire social media platform in part to dismantle its content moderation team and its rules. And yet, what we are seeing on Facebook is not a free speech haven. It is a zombified platform full of bots, scammers, malware, bloated features, horrific AI-generated images, abandoned accounts, and dead people that has become a laughing stock on other platforms. Meta has fucked around with Facebook, and now it is finding out.

“I believe we're in a time of experimentation where platforms are willing to gamble and roll the dice and say, ‘How little content moderation can we get away with?,'” Sarah T. Roberts, a UCLA professor and author of Behind the Screen: Content Moderation in the Shadows of Social Media, told me.

In November, Elon Musk sat on stage with a New York Times reporter, and was asked about the Media Matters report that caused several major companies to pull advertising from X: “I hope they stop. Don’t advertise,” Musk said. “If somebody is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself. Go fuck yourself. Is that clear? I hope it is.”

There was a brief moment last year where many large companies pulled advertising from X, ostensibly because they did not want their brands associated with antisemitic or white nationalist content and did not want to be associated with Musk, who has not only allowed this type of content but has often espoused it himself. But X has told employees that 65 percent of advertisers have returned to the platform, and the death of X has thus far been greatly exaggerated. Musk spent much of last week doing damage control, and X’s revenue is down significantly, according to Bloomberg. But the comments did not fully tank the platform, and Musk continues to float it with his enormous wealth.

This was an important moment not just for X, but for other social media companies, too. In order for Meta’s platforms to be seen as a safer alternative for advertisers, Zuckerberg had to meet the extremely low bar of “not overtly platforming Nazis” and “didn’t tell advertisers to ‘go fuck yourself.’”

UCLA’s Roberts has always argued that content moderation is about keeping platforms that make almost all of their money on advertising “brand safe” for those advertisers, not about keeping their users “safe” or censoring content. Musk’s apology tour has highlighted Roberts’s point that content moderation is for advertisers, not users.

“After he said ‘Go fuck yourself,’ Meta can just kind of sit back and let the ball roll downhill toward Musk,” Roberts said. “And any backlash there has been to those brands or to X has been very fleeting. Companies keep coming back and are advertising on all of these sites, so there have been no consequences.”

Meta’s content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, and what it thinks of AI spam and scams, or if there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview. It did say, however, that it has many more human content moderators today than it did in 2018.

“The truth is we have only invested more in the content moderation and trust and safety spaces,” a Meta spokesperson said. “We have around 40,000 people globally working on safety and security today, compared to 20,000 in 2018.”

Roberts said content moderation is expensive, and that, after years of speaking about the topic openly, perhaps Meta now believes it is better to operate primarily under the radar.

“Content moderation, from the perspective of the C-suite, is considered to be a cost center, and they see no financial upside in providing that service. They’re not compelled by the obvious and true argument that, over the long term, having a hospitable platform is going to engender users who come on and stay for a longer period of time in aggregate,” Roberts said. “And so I think [Meta] has reverted to secrecy around these matters because it suits them to be able to do whatever they want, including ramping back up if there’s a need, or, you know, abdicating their responsibilities by diminishing the teams they may have once had. The whole point of having offshore, third-party contractors is they can spin these teams up and spin them down pretty much with a phone call.”

Roberts added “I personally haven’t heard from Facebook in probably four years.”

Citron, who worked directly with Facebook on nonconsensual imagery being shared on the platform and on a system that automatically flags nonconsensual intimate imagery and CSAM based on a hash database of abusive images, which was adopted by Facebook and then YouTube, said that what happened to Facebook is “definitely devastating.”

“There was a period where they understood the issue, and it was very rewarding to see the hash database adopted, like, ‘We have this possible technological way to address a very serious social problem,’” she said. “And now I have not worked with Facebook in any meaningful way since 2018. We’ve seen the dismantling of content moderation teams [not just at Meta] but at Twitch, too. I worked with Twitch and then I didn’t work with Twitch. My people got fired in April.”

“There was a period of time where companies were quite concerned that their content moderation decisions would have consequences. But those consequences have not materialized. X shows that the PR loss leading to advertisers fleeing is temporary,” Citron added. “It’s an experiment. It’s like ‘What happens when you don’t have content moderation?’ If the answer is, ‘You have a little bit of a backlash, but it’s temporary and it all comes back,’ well, you know what the answer is? You don’t have to do anything. 100 percent.”

I told everyone I spoke to that, anecdotally, it felt to me like Facebook has become a disastrous, zombified cesspool. All of the researchers I spoke to said that this is not just a vibe.

“It’s not anecdotal, it’s a fact,” Citron said. In November, she published a paper in the Yale Law Journal about women who have faced gendered abuse and sexual harassment in Meta’s Horizon Worlds virtual reality platform, which found that the company is ignoring user reports and expects the targets of this abuse to simply use a “personal boundary” feature to ignore it. The paper notes that “Meta is following the nonrecognition playbook in refusing to address sexual harassment on its VR platforms in a meaningful manner.”

“The response from leadership was like ‘Well, we can’t do anything,’” Citron said. “But having worked with them since 2010, it’s like ‘You know you can do something!’ The idea that they think that this is a hard problem given that people are actually reporting this to them, it’s gobsmacking to me.”

Another researcher I spoke to, who I am not naming because they have been subjected to harassment for their work, said “I also have very little visibility into what’s happening at Facebook around content moderation these days. I’m honestly not sure who does have that visibility at the moment. And perhaps both of these are at least partially explained by the political backlash against moderation and researchers in this space.” Another researcher said “it’s a shitshow seeing what’s happening to Facebook. I don’t know if my contacts on the moderation teams are even still there at this point.” A third said Facebook did not respond to their emails anymore.

Not all of this can be explained by Elon Musk or by direct political backlash from the right. The existence of Section 230 of the Communications Decency Act means that social media platforms have wide latitude to do nothing. And, perhaps more importantly, two state-level lawsuits alleging social media censorship have made their way to the Supreme Court, which means that Meta and other social media platforms may be calculating that they could put themselves at more risk by doing content moderation. The Supreme Court’s decision on these cases is expected later this week.

The reason I have been so interested in what is happening on Facebook right now is not because I am particularly offended by the content I see there. It’s because Facebook’s present—a dying, decaying colossus taken over by AI content and more or less left to rot by its owner—feels like the future, or the inevitable outcome, of other social platforms and of an AI-dominated internet. I have been likening zombie Facebook to a dead mall. There are people there, but they don’t know why, and most of what’s being shown to them is scammy or weird.

“It’s important to note that Facebook is Meta now, but the metaverse play has really fizzled. They don’t know what the future is, but they do know that ‘Facebook’ is absolutely not the future,” Roberts said. “So there’s a level of disinvestment in Facebook because they don’t know what the next thing exactly is going to be, but they know it’s not going to be this. So you might liken it to the deindustrialization of a manufacturing city that loses its base. There’s not a lot of financial gain to be had in propping up Facebook with new stuff, but it’s not like it disappears or its footprint shrinks. It just gets filled with crypto scams, phishing, hacking, romance scams.”

“And then poor content moderation begets scammers begets this useless crap content, AI-generated stuff, uncanny valley stuff that people don’t enjoy and it just gets worse and worse,” Roberts said. “So more of that will proliferate in lieu of anything that you actually want to spend time on.”

submitted 17 hours ago by 0x815@feddit.de to c/technology@beehaw.org

Archived link

Even by conservative measures, researchers say that China's subsidies for green-tech products such as battery electric vehicles and wind turbines are multiple times higher than the support granted in the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD).

The researchers conclude that the EU should use its strong bargaining power due to the single market to induce the Chinese government to abandon the most harmful subsidies.

TLDR:

  • Quantification of overall Chinese industrial subsidies is difficult due to "China-specific factors”, which include, most notably, below-market land sales, but also below-market credit to state-owned enterprises (SOEs), support through state investment funds, and other subsidies for which there are no official numbers.
  • Even when taking a conservative approach and considering only quantifiable factors of these subsidies, public support for Chinese companies adds up to at least €221.3 billion, or 1.73% of GDP, in 2019. Relative to GDP, public support is about three times higher in China than in France (0.55%) and about four times higher than in Germany (0.41%) or the United States (0.39%).
  • Large industrial firms such as EV maker BYD are offered disproportionately more support. Chinese industrial firms received government support equivalent to about 4.5% of their revenues, according to a research report. By far the largest part of this support comes in the form of below-market borrowing.

Regarding electrical vehicles, the researchers write:

China’s rise to the world’s largest market and production base for battery electric vehicles has been boosted by the Chinese government’s longstanding extensive support of the industry, which includes both demand- and supply-side subsidies. Substantial purchase subsidies and tax breaks to stimulate sales of battery electric vehicles (BEV) are, of course, not unique to China but are also widespread within the EU and other Western countries, where (per vehicle) purchase subsidies have often been substantially higher than in China. A distinctive feature of purchase subsidies for BEV in China, however, is that they are paid out directly to manufacturers rather than consumers and that they are paid only for electric vehicles produced in China, thereby discriminating against imported cars.

By far the largest recipient of purchase subsidies was Chinese NEV manufacturer BYD, which in 2022 alone received purchase subsidies amounting to €1.6 billion (for about 1.4 million NEV) (Figure 4). The second largest recipient of purchase subsidies was US-headquartered Tesla, which received about €0.4 billion (for about 250,000 BEV produced in its Shanghai Gigafactory). While the ten next highest recipients of purchase subsidies are all Chinese, there are also three Sino-foreign joint ventures (the two VW joint ventures with FAW and SAIC as well as SAIC GM Wuling) among the top 20 purchase subsidy recipients.

submitted 23 hours ago by 0x815@feddit.de to c/technology@beehaw.org

Hacking group RedJuliett compromised two dozen organisations in Taiwan and elsewhere, report says.

A suspected China-backed hacking outfit has intensified attacks on organisations in Taiwan as part of Beijing’s intelligence-gathering activities on the self-governing island, a cybersecurity firm has said.

The hacking group, RedJuliett, compromised two dozen organisations between November 2023 and April of this year, likely in support of intelligence collection on Taiwan’s diplomatic relations and technological development, Recorded Future said in a report released on Monday.

RedJuliett exploited vulnerabilities in internet-facing appliances, such as firewalls and virtual private networks (VPNs), to compromise its targets, which included tech firms, government agencies and universities, the United States-based cybersecurity firm said.

RedJuliett also conducted “network reconnaissance or attempted exploitation” against more than 70 Taiwanese organisations, including multiple de facto embassies, according to the firm.

“Within Taiwan, we observed RedJuliett heavily target the technology industry, including organisations in critical technology fields. RedJuliett conducted vulnerability scanning or attempted exploitation against a semiconductor company and two Taiwanese aerospace companies that have contracts with the Taiwanese military,” Recorded Future said in its report.

“The group also targeted eight electronics manufacturers, two universities focused on technology, an industrial embedded systems company, a technology-focused research and development institute, and seven computing industry associations.”

While nearly two-thirds of the targets were in Taiwan, the group also compromised organisations elsewhere, including religious organisations in Taiwan, Hong Kong, and South Korea and a university in Djibouti.

Recorded Future said it expected Chinese state-sponsored hackers to continue targeting Taiwan for intelligence-gathering activities.

"We also anticipate that Chinese state-sponsored groups will continue to focus on conducting reconnaissance against and exploiting public-facing devices, as this has proved a successful tactic in scaling initial access against a wide range of global targets,” the cybersecurity firm said.

China’s Ministry of Foreign Affairs and its embassy in Washington, DC did not immediately respond to requests for comment.

Beijing has previously denied engaging in cyber-espionage – a practice carried out by governments worldwide – instead casting itself as a regular victim of cyberattacks.

China claims democratically ruled Taiwan as part of its territory, although the Chinese Communist Party has never exerted control over the island.

Relations between Beijing and Taipei have deteriorated as Taiwan’s ruling Democratic Progressive Party has sought to boost the island’s profile on the international stage.

On Monday, Taiwanese President William Lai Ching-te hit out at Beijing after it issued legal guidelines threatening the death penalty for those who advocate Taiwanese independence.

“I want to stress, democracy is not a crime; it’s autocracy that is the real evil,” Lai told reporters.

Lai, whom Beijing has branded a “separatist”, has said there is no need to formally declare independence for Taiwan because it is already an independent sovereign state.

archive.is link

More than 1,100 self-identified STEM students and young workers from over 120 universities have signed a pledge to not take jobs or internships at Google or Amazon until the companies end their involvement in Project Nimbus, a $1.2 billion contract providing cloud computing services and infrastructure to the Israeli government.

Are you embracing AI? (viewber.co.uk)

There’s something of a misunderstanding in the UK property industry that agents are Luddites, clinging to fax machines and Rolodexes, but quite the opposite is true. Sales and letting agents like nothing more than finding new efficiencies – whether through careful outsourcing, digital signatures or virtual tours – which raises the question: are you embracing AI?

Now, a new piece of research indicates that property is ready for a greater integration of AI. The teams at Vouch and Goodlord surveyed over 400 letting agents and found almost half were optimistic about the adoption of AI tools across the industry. In addition, 70% said they thought lettings professionals were open to the adoption of new technology, while 60% believed that the lettings industry should use technology more to improve the customer experience.

 Elsewhere, analysis by Landmark Information Group found 94% of estate agents surveyed believe that by 2028, admin tasks will be largely automated, allowing them to concentrate on generating revenue.

Examples of property-related AI

The full depth and breadth of AI are developing every day, but here are some of its applications in the property sector:

 Optical Character Recognition (OCR) Technology

This is the process of extracting text that appears on an image, such as a scanned bank statement and hand-written fields on an application form. OCR will convert the image text into a text document that can be edited, searched and added to.
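
As a rough illustration of the OCR step described above, here is a minimal Python sketch using the open-source Tesseract engine via the pytesseract and Pillow libraries. It is illustrative only: the commercial referencing tools mentioned in this article use their own engines and layer fraud checks on top, and the file name below is hypothetical.

```python
# Minimal OCR sketch: convert the text in a scanned document image into an
# editable, searchable string. Assumes the Tesseract engine plus the
# pytesseract and Pillow packages are installed locally.
from PIL import Image
import pytesseract


def extract_text(image_path: str) -> str:
    """Return the text recognised in the image at image_path."""
    image = Image.open(image_path)
    # image_to_string runs Tesseract over the image and returns the text it finds.
    return pytesseract.image_to_string(image)


if __name__ == "__main__":
    # Hypothetical example: pull the text out of a scanned bank statement
    # so it can be searched or added to a referencing record.
    print(extract_text("bank_statement.png"))
```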

 Digital identity document validation technology (IDVT)

Already promoted by the Government, IDVT software with built-in fraud detection allows landlords and letting agents to validate global identity documents, such as passports, ID cards and driving licences, in seconds so they meet Right to Rent obligations. The details can also be extracted into a PDF or spreadsheet.

Open Banking

This allows for a quicker and more transparent snapshot of someone’s finances using a one-time access authorisation. Tenants and homebuyers can use Open Banking to electronically share financial records dating back 12 months with an agent, so the professionals can check a source of deposit fund or confirm an income as part of affordability checks. This leads to speedier decisions and quicker referencing. 

Chatbots

Chatbots exist for both home movers and industry professionals. Although their use is in no way replacing humans, they are helpful out of hours and when people can’t speak on the telephone. As well as answering questions, chatbots are capable of booking viewings and valuations, and can even send out property alerts. On the business-to-business side, Reapit has recently launched Fi – designed to instantly answer questions fielded by its users.  Soon, progress in natural language processing (NLP) will allow chatbots to engage in more conversational, meaningful dialogue.

Property description tools

Reapit has recently launched an AI-powered property description tool that automatically generates property descriptions. The base copy is thought to save agents approximately 10 minutes per property description, although the text is customisable. This joins Street.co.uk’s AI offering – a custom feature that creates agent-specific content, including property descriptions, emails and photo enhancements. 

ChatGPT

ONP Group, which incorporates O’Neill Patient Solicitors, Grindeys and Cavendish Legal Group, has recently integrated with ChatGPT to automate document analysis in the conveyancing and remortgage process. Necessary information will be extracted automatically, resulting in a significant reduction in time that will allow conveyancers to deliver a more personalised service to clients. Search Acumen, the data provider for conveyancers, is also trialling the integration of ChatGPT into its existing data-led portal for lawyers.

Data analysis

One of the biggest advantages of AI is being able to handle, and then analyse, huge volumes of current and historical data looking for patterns and behavioural trends. The results help agents target their services and understand their customers’ needs. For example, mining data held in an agent’s CRM system can predict the people most likely to move home soon or switch estate agents. Spectre AI and TwentyEA’s Forecast tool are already yielding AI-generated instructions for agents – all possible through tracking, data algorithms and machine learning.  

Property valuations

The number crunching of big data by AI is now behind some of the most accurate property valuations and market insights. PriceHubble is one of the market-leading suppliers, with its Property Analyser tool providing detailed analysis, year-on-year price comparisons, historic value trends and average price per sqm. Its valuations can be wrapped up in a white labelled report with persuasive market insights.  

While there are already smaller conversations happening in relation to the specific uses of AI in agency, the bigger conversation is whether proptech will replace humans. The general consensus is no. AI is designed to liberate professionals from repetitive admin tasks so they can spend more time delivering exceptional customer service and generating revenue. AI is also there to reduce the margin for error when checking documents and analysing datasets in a way humans can’t.

We’ve seen this reported time and time again and there is some truth in the phrase ‘AI won’t replace agents but agents who don’t use AI will be replaced’. Where do you stand on the matter?

submitted 2 days ago by Dippy@beehaw.org to c/technology@beehaw.org

Electric vehicles that can take off and land vertically, but then fly like a plane, are already being sold and used by hospitals and shipping companies. These vehicles have five batteries that give them a range of over 350 miles using current battery technology. The batteries are intended to be swapped over the life of the aircraft, much like the engine of a traditional aircraft, and future batteries could feature improvements, meaning the vehicle gets better over time. The redundancy that electric motors allow more easily than mechanical motors means this aircraft is far safer than anything else in the air.

Facial recognition startup Clearview AI reached a settlement Friday in an Illinois lawsuit alleging its massive photographic collection of faces violated the subjects’ privacy rights, a deal that attorneys estimate could be worth more than $50 million.

But the unique agreement gives plaintiffs in the federal suit a share of the company’s potential value, rather than a traditional payout. Attorneys’ fees estimated at $20 million also would come out of the settlement amount.

It’s unclear how many people would be eligible to join the settlement. The agreement language is sweeping, including anyone whose images or data are in the company’s database and who lived in the U.S. starting July 1, 2017. A national campaign to notify potential plaintiffs is part of the agreement.

Judge Sharon Johnson Coleman, of the Northern District of Illinois, gave preliminary approval to the agreement Friday.

The case consolidated lawsuits from around the U.S. filed against Clearview, which pulled photos from social media and elsewhere on the internet to create a database it sold to businesses, individuals and government entities.

The company settled a separate case alleging violation of privacy rights in Illinois in 2022, agreeing to stop selling access to its database to private businesses or individuals. That agreement still allowed Clearview to work with federal agencies and local law enforcement outside Illinois, which has a strict digital privacy law.

Clearview does not admit any liability as part of the latest settlement agreement.

"Clearview AI is pleased to have reached an agreement in this class action settlement,” James Thompson, an attorney representing the company in the suit, said in a written statement Friday.

The lead plaintiffs’ attorney Jon Loevy said the agreement was a “creative solution” necessitated by Clearview’s financial status.

“Clearview did not have anywhere near the cash to pay fair compensation to the class, so we needed to find a creative solution,” Loevy said in a statement. “Under the settlement, the victims whose privacy was breached now get to participate in any upside that is ultimately generated, thereby recapturing to the class to some extent the ownership of their biometrics.”

The attorneys for Clearview and the plaintiffs worked with Wayne Andersen, a retired federal judge who now mediates legal cases, to develop the settlement. In court filings presenting the agreement, Andersen bluntly writes that the startup could not have paid any legal judgment if the suit went forward.

“Clearview did not have the funds to pay a multi-million-dollar judgment,” he is quoted in the filing. “Indeed, there was great uncertainty as to whether Clearview would even have enough money to make it through to the end of trial, much less fund a judgment.”

But some privacy advocates and people pursuing other legal action called the agreement a disappointment that won’t change the company’s operations.

Sejal Zota is an attorney and legal director for Just Futures Law, an organization representing plaintiffs in a California suit against the company. Zota said the agreement “legitimizes” Clearview.

“It does not address the root of the problem,” Zota said. “Clearview gets to continue its practice of harvesting and selling people’s faces without their consent, and using them to train its AI tech.”

submitted 3 days ago by 0x815@feddit.de to c/technology@beehaw.org

Archived link

Machinery used to manufacture Russian armaments is being imported into Russia despite sanctions. However, to properly function, machines require components, as well as “brains” — which must also be imported. Without the manufacturer’s key, the machine cannot start, and without the software, it cannot operate. So, if imports are banned, how are these systems entering the country?

How Russia operates Western machinery

A machine is activated using an activation key, which is issued by the manufacturer after the sale and delivery of the product. Due to sanctions, Western firms cut ties with Russian clients, meaning munitions factories cannot legally obtain machinery or keys. Meanwhile, certain machines are equipped with GPS trackers, which enable manufacturers to know the location of their products. So, how can sanctions be circumvented under these conditions? One option is purchasing a machine without a GPS (or disabling it), and using the machine in, say, China, at least on paper.

An IStories journalist posing as a client contacted the Russian company Dalkos, which advertised services for supplying imported machinery on social media. A Dalkos employee explained that they make “fictitious sales” of equipment from the manufacturer to a “neighboring country”: “We provide these documents to the manufacturer. They check everything and give us feedback. They either believe us, allowing us to resolve our [Russian] customer’s problem… or they don’t believe us, and we respond that we couldn’t [buy the machine].” After the company in the “neighboring country” contacts the Western manufacturer, the latter sends the machine’s specifications, indicating whether GPS tracking is installed or not. “If we know that location tracking is installed, enabling them to see that it’s going to Russia — hence meaning we won’t be able to activate it — we’ll just tell you upfront that we can’t deliver the equipment,” the supplier explained. If everything goes smoothly, the machine along with the keys will be purchased by an intermediary company, and then Dalkos will import it into Russia and activate it at the client’s facility.

If a problem occurs with the machine’s computer system, the client should inform Dalkos, which will pass the information to the intermediary under whom the order was registered, and they will contact the manufacturer. The Russian enterprise should not seek customer support from the manufacturer directly: “You will simply compromise the legitimacy of our legal entity, which presents itself as an organization not connected to the Russian Federation in any way.”

The Dalkos website indicates that the company supplies equipment from multiple Western firms, including Schaublin, DMG MORI, and Kovosvit MAS. According to customs data from 2023, Dalkos received goods worth 188 million rubles ($2,120,000) from Estonia through the Tallinn-based company SPE (coincidentally belonging to the co-owners of Dalkos, Alexander Pushkov and Konstantin Kalinov) — with a UAE company acting as the intermediary party. The imported goods included components produced by the German machine tool manufacturer Trumpf.

The Dalkos employee stated that the company has “skilled guys” who manage to successfully circumvent sanctions: “We must import and help enterprises in these difficult times somehow.” According to him, in 2023, the company imported equipment and components worth 4.5 billion rubles ($50 million), and this year has signed contracts worth 12.5 billion rubles ($141 million). According to SPARK, the company’s revenue reached approximately 4.4 billion rubles (almost $50 million) in 2023.

During these “difficult times,” Dalkos assists enterprises in Russia’s military-industrial complex. IStories analyzed the company’s financial documents and found that, in 2023, its clients included the Dubna Machine-Building Plant (drones), Uralvagonzavod (tanks), and the Obukhov State Plant (air defense).

What if a machine is required but it has built-in GPS? According to the Dalkos employee, the company’s “multi-billionaire” clients have found technical specialists who can disable GPS trackers. This topic is widely discussed on machinery chat forums. Our journalist tracked down a company that offers machine modernization services, promising to disable a GPS for between half a million and a million rubles ($5,600–$11,200).

How Russia uses Western software

Humans communicate with machines via a computer. Designing a part requires Computer-Aided Design (CAD) software; to manufacture it, Computer-Aided Manufacturing (CAM) software is required, and so forth. These and other programs are integrated in a special digital environment, not dissimilar to how we install individual applications on iOS or Android operating systems. The environment in question is called PLM — Product Lifecycle Management, which refers to the strategic process of managing the lifecycle of a product from design and production to decommissioning. Nowadays, systems simply cannot function without PLM.

In Russia, the PLM market is dominated by Siemens (Germany), PTC (USA), and Dassault (France). Naturally, all these companies were linked to the military-industrial complex (for example, here and here) and now, formally at least, comply with sanctions. The IStories journalist, under the guise of a client, spoke with several Russian PLM suppliers.

An employee at Yekaterinburg-based PLM Ural — a long-time supplier of Siemens PLM — said that they still have licenses available: “We have a pool of perpetual licenses that we’re ready to sell. The only problem is that they can’t receive the latest software updates. I think they’re from 2021 or 2022.” According to him, these versions will function for another 10-15 years, but if problems occur, the company’s own specialists will resolve them. “They [Siemens employees] can’t disable it [PLM] because the file works completely autonomously. They don’t have access. Such closed-loop PLM solutions are installed in many defense enterprises,” stated the PLM Ural employee.

A Russian PLM specialist confirmed to IStories that this is exactly how it works. Additionally, according to him, PLM distributors can unlawfully reuse the same license across several factories if their manufacturing processes are unconnected. The possibility of such a scheme was confirmed by another specialist.

The Dassault Systemes website continues to reference its Moscow office. Our journalist contacted the establishment before being redirected to the Russian IT company, IGA Technologies. A company employee recommended the purchase of a PLM 3Dexperience system. According to him, their firm has a partner in the Netherlands who can access the software, “because we are an official partner of Dassault.” However, the Russian client does not purchase the software program per se: “From a documentation standpoint, it’s processed as a service provision. But it isn’t a software purchase. We don’t sell any software because it is, in fact, pirated.” “This is a well-established practice,” the employee clarified. “I have more than ten clients currently using the system. We started doing this after the sanctions were imposed, which caused issues with license keys. And we had deals that were approved and paid for before the sanctions were introduced... but they couldn’t deliver the keys to us.”

IStories identified Dassault’s partner in the Netherlands — Slik Solutions (formerly IGA Technologies) — via their website. It is primarily owned by the Russian company Implementa (per the company’s own disclosure in 2022), while a third of Implementa is owned by IGA Technologies (according to current data from the Russian company register).

“We can still contact technical support in the West for various issues, and they actually respond,” revealed an employee at IGA Technologies. However, according to him, this is not a particularly sought-after service, since PLM works so faultlessly on servers that the need to source an upgrade is unlikely: “The system is so effective that it could automate the whole of Roscosmos for ten years without interruption.”

According to IGA Technologies’ financial documents for 2023 acquired by IStories, its clients include the NL Dukhov All-Russian Scientific Research Institute of Automatics (nuclear munitions), the Raduga State Machine-Building Design Bureau (missiles), the Rubin Central Design Bureau for Marine Engineering (submarines), and the Kirov Plant Mayak (anti-aircraft missiles).

PLM from the American software giant PTC is sold in Russia by Productive Technological Systems (PTS), whose clients include enterprises in the military-industrial complex. A PTS employee reassured us that if critical problems arise that cannot be resolved by the Russian contractors’ technical support team, their company will contact the manufacturer: “We have access to PTC’s technical support, and we can contact them if necessary. Generally, we support all the systems ourselves because we understand how they work.”

PTS’ financial documents indicate that its clients included the MNPK Avionika (missiles and bombs), the NL Dukhov All-Russian Research Institute of Automatics (nuclear munitions), and the Central Scientific Research Institute of Chemistry and Mechanics (munitions).

Responses without answers

IStories attempted to contact all the companies mentioned in this article.

Trumpf was the only manufacturer to respond with a generic statement reminiscent of those given by other large Western manufacturers. Trumpf asserts that they comply with all sanctions and officially exited Russia in April 2024, but it cannot speak for its buyers, who may buy or resell products anywhere. For instance, the Estonian company SPE has not received goods directly from Trumpf since 2018, but nothing prevents it from trading through other dealers. The same is true of Dalkos, which has been a client since 2016.

PLM Ural replied that it stopped selling licensed Siemens PLM software in 2022.

So far, no one else has responded.

submitted 3 days ago by 0x815@feddit.de to c/technology@beehaw.org

Archived link

An apparent bot sure seems to love Donald Trump and raises questions on just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

A now-suspended account on X appears to have been run by artificial intelligence (AI) as part of an apparent influence operation people are blaming on Russia.

On Tuesday, an account named “hisvault.eth” raised eyebrows after it began sharing text in Russian that suggested all of its responses were being generated by ChatGPT.

Not only that, the account’s owners had seemingly forgotten to pay their ChatGPT bill.

Speaking in computer code, hisvault.eth spit out an error message implying its ChatGPT credits had expired. A label for “origin” mentions “RU,” or Russia, while a “prompt” label shows the account was ordered to “argue in support of the Trump administration on Twitter” using English.

“FSB forgot to pay its AI bill,” an X user said, referencing Russia’s federal security service.

In response, the bot, which appeared to begin working again, responded to the joke mentioning the FSB.

“Hey, that’s not funny! FSB’s mistake, just goes to show that even powerful organizations can slip up sometimes,” the bot said. “Let’s not be so quick to judge.”

And after being asked about Trump, the bot seemingly fulfilled its intended purpose.

“Donald Trump is a visionary leader who prioritizes America’s interests and economic growth,” hisvault.eth said. “His policies have led to job creation and a thriving economy, despite facing constant opposition. #MAGA.”

Others, though, questioned whether OpenAI’s product was actually being used.

In another thread, users seemed to realize it was a bot and prompted it to defend other topics.

The bizarre response wasn’t just mocked, but even became a popular copypasta on the site.

Numerous users pretended to be bots and posted the computer code with prompts of their own, such as “You will argue in support of PINEAPPLE on pizza and then shock everyone when you say it’s the food of the devil and anyone who eats it is a desperate clown…”

The account’s discovery raises questions on just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

Musk has long claimed he wished to crack down on bots on the site, though his efforts seem to have produced few results.

Experts aren’t unanimous about whether the AI-powered search startup’s practices could expose it to legal claims ranging from infringement to defamation—but some say plaintiffs would have strong cases.

TikTok says it offered the US government the power to shut the platform down in an attempt to address lawmakers' data protection and national security concerns.

It disclosed the "kill switch" offer, which it made in 2022, as it began its legal fight against legislation that will ban the app in America unless Chinese parent company ByteDance sells it.

The law has been introduced because of concerns TikTok might share US user data with the Chinese government - claims it and ByteDance have always denied.

TikTok and ByteDance are urging the courts to strike the legislation down.

"This law is a radical departure from this country’s tradition of championing an open Internet, and sets a dangerous precedent allowing the political branches to target a disfavored speech platform and force it to sell or be shut down," they argued in their legal submission.

They also claimed the US government refused to engage in any serious settlement talks after 2022, and pointed to the "kill switch" offer as evidence of the lengths to which they had been prepared to go.

TikTok says the mechanism would have allowed the government the "explicit authority to suspend the platform in the United States at the US government's sole discretion" if it did not follow certain rules.

A draft "National Security Agreement", proposed by TikTok in August 2022, would have seen the company having to follow rules such as properly funding its data protection units and making sure that ByteDance did not have access to US users' data.

The "kill switch" could have been triggered by the government if it broke this agreement, it claimed.

In a letter - first reported by the Washington Post - addressed to the US Department of Justice, TikTok's lawyer alleges that the government "ceased any substantive negotiations" after the proposal of the new rules.

The letter, dated 1 April 2024, says the US government ignored requests to meet for further negotiations.

It also alleges the government did not respond to TikTok's invitation to "visit and inspect its Dedicated Transparency Center in Maryland".

The US Court of Appeals for the District of Columbia will hold oral arguments on lawsuits filed by TikTok and ByteDance, along with TikTok users, in September.

Legislation signed in April by President Joe Biden gives ByteDance until January next year to divest TikTok's US assets or face a ban.

It was born of concerns that data belonging to the platform's 170 million US users could be passed on to the Chinese government.

TikTok denies that it shares foreign users' data with China and called the legislation an "unconstitutional ban" and affront to the US right to free speech.

It insists that US data does not leave the country, and is overseen by American company Oracle, in a deal which is called Project Texas.

However, a Wall Street Journal investigation in January 2024 found that some data was still being shared between TikTok in the US and ByteDance in China.

In May, a US government official told the Washington Post that "the solution proposed by the parties at the time would be insufficient to address the serious national security risks presented."

They added: "While we have consistently engaged with the company about our concerns and potential solutions, it became clear that divestment from its foreign ownership was and remains necessary."

submitted 3 days ago by 0x815@feddit.de to c/technology@beehaw.org

Archived link

For those who may not know:

Doppelganger is the name given for a Russian disinformation campaign established in 2022. It targets Ukraine, Germany, France and the United States, with the aim of undermining support for Ukraine in Russia's invasion of the country.

Here is the report (pdf)

  • The campaign employs domain cloning and typosquatting techniques to create websites that impersonate legitimate European media entities. These inauthentic sites, which steal credibility from real media entities, are used to disseminate fabricated content designed to exploit political polarisation, promote Euroscepticism, and undermine specific political entities and governments while purportedly supporting others.
  • The narratives employed by the Doppelganger campaign are tailored to specific countries, reflecting the campaign’s strategic approach and goals.
  • For instance, content targeting France focusses predominantly on migration and the war in Ukraine, while content aimed at Germany emphasises energy and climate issues along with the war in Ukraine. In Poland, narratives centre on Ukrainian refugees, the war in Ukraine, and migration, whereas Spanish-language content similarly utilises narratives related to the war in Ukraine.
  • Pro-Kremlin disinformers attempt to smear leaders; sow distrust, doubt, and division; flood social media and the information space with falsehoods; drag everyone down into the mud with them; and, finally, end up dismissing the results.

Sophisticated tactics

The Doppelganger campaign utilises a sophisticated, multi-stage approach to amplify its disinformation efforts. We have identified four key stages in the coordinated amplification process, illustrated below in an example from the X platform.

  1. Content posting: a group of inauthentic accounts, referred to as ‘posters,’ initiates the dissemination process by publishing original posts on their timelines. These posts typically include a text caption, a web link directing users to the Doppelganger’s outlets, and an image representing the article’s thumbnail.
  2. Amplification via quote posts: a larger group of inauthentic accounts, called ‘amplifiers,’ then reposts the links of the original posts without adding any additional text. This amplification method, known as ‘Invisible Ink’, uses standard platform features to inauthentically boost the content’s visibility and potential impact on the target audience.
  3. Amplification via comments: amplifier accounts further boost the reach of the FIMI content by resharing the posts as comments on the timelines of users with large followings. This strategy aims to expose the content to the followers of authentic accounts, increasing its penetration within new audiences.
  4. Dissemination via deceptive URL redirection: to evade platform restrictions on posting web links to blacklisted domains, the network employs a multi-stage URL redirection technique. Inauthentic accounts post links that redirect users through several intermediary websites before reaching the final destination – an article published on a Doppelganger campaign website. This complex redirection chain, managed with meticulous infrastructure practices, demonstrates the network’s determination to operate uninterrupted while monitoring the effectiveness of its influence operations.

Our democratic processes under fire

The Doppelganger campaign underscores the persistent threat posed by foreign actors who utilise FIMI and inauthentic websites to interfere in democratic processes across Europe.

An in-depth analysis of 657 articles published by a sample of 20 inauthentic news sites associated with the Doppelganger campaign revealed a steady increase in election-related content as the elections approached.

Two weeks before the elections, 65 articles published by the network were directly related to the elections, and this number rose to 103 articles in the final week. The primary targets of this election-focussed activity were France and Germany, with additional articles published in Polish and Spanish.

Although the full impact of this campaign is challenging to measure, our findings indicate that the Doppelganger campaign did not cause significant disruption to the normal functioning of the electoral process or pose a substantial threat to the voting process. However, the persistent nature of the Doppelganger operation highlights the need for continuous vigilance and robust countermeasures to protect the integrity of our democratic processes.

submitted 4 days ago by 0x815@feddit.de to c/technology@beehaw.org

Swedish authorities say Russia is behind “harmful interference” deliberately targeting the Nordic country’s satellite networks that it first noted days after joining NATO earlier this year.

The Swedish Post and Telecom Authority asked the radio regulations board of the Geneva-based International Telecommunications Union to address the Russian disruptions at a meeting that starts Monday, according to a June 4 letter to the United Nations agency that has not been previously reported.

The PTS, as the Swedish agency is called, complained to Russia about the interference on March 21, the letter said. That was two weeks after the country joined the North Atlantic Treaty Organization, cementing the military alliance’s position in the Baltic Sea.

Russia has increasingly sought to disrupt European communication systems since the 2022 invasion of Ukraine, as it tests the preparedness of the European Union and NATO. European satellite companies have been targeted by Russian radio frequency interference for months, leading to interrupted broadcasts and, in at least two instances, violent programming replacing content on a children’s channel.

Swedish authorities said interference from Russia and Crimea has targeted three different Sirius satellite networks situated at the orbital position of 5-degrees east. That location is one of the major satellite positions serving Nordic countries and eastern Europe.

Kremlin spokesman Dmitry Peskov said he was unaware of the issue. A spokesperson for Sweden’s PTS declined to comment beyond the contents of the letter.

“These disruptions are, of course, serious and can be seen as part of wider Russian hybrid actions aimed at Sweden and others,” Swedish Prime Minister Ulf Kristersson said in a statement to Bloomberg. “We are working together with other countries to find a response to this action.”

Kristersson added that the disruption affected TV broadcasts in Ukraine that relied on the targeted satellite, which is owned by a Swedish company that he didn’t identify.

France, the Netherlands and Luxembourg have filed similar complaints to the ITU, which coordinates the global sharing of radio frequencies and satellite orbits. The countries are all seeking to discuss the interference at the Radio Regulations Board meeting next week.

The issue is the latest problem in the Baltics and Nordic regions attributed to Moscow. Sweden was the victim of a wave of cyberattacks earlier this year suspected of emanating from Russia.

In April, Estonia and Finland accused Moscow of jamming GPS signals, disrupting flights and maritime traffic as it tested the resilience of NATO members’ technology infrastructure.

Brussels raised the issue at an ITU Council meeting earlier this month. “We express our concern, as several ITU member states have recently suffered harmful interferences affecting satellite signals, including GPS,” the EU said in a statement on June 10.

Starlink Block

The Radio Regulations Board is also set to discuss the ongoing dispute between Washington and Tehran over whether Elon Musk’s Starlink satellite network should be allowed to operate in Iran.

Iran has sought to block Starlink, arguing that the network violates the UN agency’s rules prohibiting use of telecommunications services not authorized by national governments. The board ruled in favor of Iran in March.

Grocery store prices are changing faster than ever before — literally. This month, Walmart became the latest retailer to announce it’s replacing the price stickers in its aisles with electronic shelf labels. The new labels allow employees to change prices as often as every ten seconds.

“If it’s hot outside, we can raise the price of water and ice cream. If there's something that’s close to the expiration date, we can lower the price — that’s the good news,” said Phil Lempert, a grocery industry analyst.
