[-] Fauxreigner@beehaw.org 8 points 3 months ago

Unless you're willing to put in some kind of response that basically says "I'm not going to respond to that" (and that's a sure way to break immersion), this is effectively impossible to do well, because the writer has to anticipate every possible thing a player could say and craft a response to it. If they don't, the game ends up picking a "nearest fit" that is not at all what the player was trying to say, and the reaction is going to be nonsensical from the player's perspective.

LA Noire is a great example of this, although from the side of the player character: the dialogue was written with the "Doubt" option as "Press" (as in, put pressure on the other party). As a result, a suspect can say something, the player selects "Doubt", and Phelps goes nuts making wild accusations instead of pointing out an inconsistency.

Except worse, because in this case the player says something like "Why didn't you say something to your boss about feeling sick?" and the game interprets it as "Accuse them of trying to sabotage the business."

[-] Fauxreigner@beehaw.org 9 points 5 months ago

> I think there’s massive untapped demand for things like mini city cars and kei trucks.

Not just that, but also the more middle-ground small cars. I'd love an EV truck sized the way trucks were in the '80s and '90s (which was more or less comparable to a midsize sedan, just taller). The push toward bigger and bigger wheelbases to take advantage of loopholes in the efficiency standards really doesn't need to carry over to EVs, but it's what all the major automakers are doing.

[-] Fauxreigner@beehaw.org 4 points 7 months ago

More to the point, there's already a perfectly suitable set of rules that every other federal judge is bound by; we don't need a new set of rules at all.

[-] Fauxreigner@beehaw.org 13 points 7 months ago

From the opening page:

> The Court has long had the equivalent of common law ethics rules, that is, a body of rules derived from a variety of sources, including statutory provisions, the code that applies to other members of the federal judiciary, ethics advisory opinions issued by the Judicial Conference Committee on Codes of Conduct, and historic practice. The absence of a Code, however, has led in recent years to the misunderstanding that the Justices of this Court, unlike all other jurists in this country, regard themselves as unrestricted by any ethics rules. To dispel this misunderstanding, we are issuing this Code, which largely represents a codification of principles that we have long regarded as governing our conduct.

So...

  1. Why, if you think the code that applies to all other federal judges is good, did you not simply adopt it?
  2. So the problem is that people think the justices consider themselves unbound by ethics rules because they don't have a formal code, not the behavior of certain justices that has come to light in recent years. Got it.

[-] Fauxreigner@beehaw.org 26 points 9 months ago

Nah, these accusations of racism from a company owned by an apartheid-era South African emerald mine heir are too racist.

[-] Fauxreigner@beehaw.org 8 points 11 months ago

> Current-gen AI isn’t just viewing art, it’s storing a digital copy of it on a hard drive.

This is factually untrue. For example, Stable Diffusion models are in the range of 2GB to 8GB, trained on a set of roughly 5.85 billion images. If the model were storing the images, that would leave roughly one byte per image, and a single byte has only 256 possible values. Images are downloaded as part of training the model, but they're eventually "destroyed": the model doesn't contain them at all, and it doesn't need to refer back to them to generate new images.
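To make the arithmetic explicit (rough numbers; actual checkpoint sizes vary, this is just the back-of-the-envelope version):

```python
# Back-of-the-envelope: how many bytes per training image the model could
# possibly "store", assuming ~2-8 GB checkpoints and a ~5.85 billion image
# training set, as mentioned above.
model_sizes_bytes = [2e9, 8e9]   # ~2 GB and ~8 GB model files
training_images = 5.85e9         # number of images in the training set

for size in model_sizes_bytes:
    print(f"{size / 1e9:.0f} GB model -> {size / training_images:.2f} bytes per image")

# Output:
# 2 GB model -> 0.34 bytes per image
# 8 GB model -> 1.37 bytes per image
# A single byte holds only 256 distinct values, so there isn't room to store
# even a thumbnail of each image, let alone the image itself.
```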

It's absolutely true that the training process requires downloading and storing images, but the product of training is a model that doesn't contain any of the original images.

None of that is to say that there's absolutely no valid copyright claim, but it seems like either option is pretty bad in the long term. AI-generated content is going to put a lot of people out of work and make a lot of money for a few rich people, based on the work of others who aren't getting a cut. That's bad.

But the converse, where we say that copyright is maintained even if a work is only stored as weights in a neural network, is also pretty bad; you're going to have a very hard time defining that in a way that doesn't also cover how humans store information and integrate it to create new art. That's also bad. I'm pretty sure nobody who creates art wants to have to pay Disney a cut because they once looked at some images Disney owns.

The best you're likely to do in that situation is to say it's OK if a human does it, but not a computer. But that still hits a lot of stumbling blocks around definitions, especially since computers are already used constantly in creating art. And if we ever reach the point where digital consciousness is possible, that adds a whole host of civil rights issues.

[-] Fauxreigner@beehaw.org 4 points 11 months ago

> The only other solution is that the richest person in the world (officially) is this stupid. This is almost harder to believe than a conspiracy to destroy twitter.

Why is that hard to believe? The mega-rich are not notably more intelligent than anyone else; they just started decades ago with inherited wealth and got lucky early.

[-] Fauxreigner@beehaw.org 6 points 11 months ago* (last edited 11 months ago)

Beyond that, it'll try to summarize a book, but it often can't do so successfully, even though it will act as if it has. Try it on something even a little bit obscure and it can't really give you good information. I tried with Blindsight, which isn't part of popular culture but was a Hugo nominee, so not completely obscure. It knew who the characters were and had a general sense of the tone, but it completely fabricated every major plot point I asked about. I tried the same with A Head Full of Ghosts, which is better known but still not something everyone has read, and got the same result.

One thing I found that's really fun is to ask it a question and then follow up with something like "Are you sure about that?" It'll almost always correct itself and make up something else. It'll go one step further and incorporate details you ask about: give it a prompt like "Are you sure this character died of natural causes? I thought they were killed by Bob" and it will very frequently say you're right and make up a story along those lines that's plausible within the text. It doesn't work on really popular stuff; you can't convince it that Optimus Prime saves Luke Skywalker in RotJ, but for anything even a little less well known, it'll confidently tell you details it's making up out of whole cloth.

[-] Fauxreigner@beehaw.org 4 points 1 year ago

Given that the late game usually turns into a full-screen disco of weapon effects, I'm really not sure how multiplayer will work.

[-] Fauxreigner@beehaw.org 3 points 1 year ago

No worries! We're making a lot of assumptions here either way.

[-] Fauxreigner@beehaw.org 9 points 1 year ago

If it was actually them, I'd guess they were banging on the titanium end cap.

[-] Fauxreigner@beehaw.org 16 points 1 year ago* (last edited 1 year ago)

There are reports that acoustic systems picked up banging noises at 30 minute intervals. Until I heard that, I was convinced it had imploded. Now I'm not so sure, and it'll only be worse if they aren't rescued. Implosion would at least have been fast.
