• 2 Posts
  • 111 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • I have no idea how accurate this info on FindLaw.com is, but according to it, you don’t need a lawyer in small claims court (in the US). And according to https://en.wikipedia.org/wiki/Small_claims_court many other countries have similar small claims courts: “Australia, Brazil, Canada, England and Wales, Hong Kong, Ireland, Israel, Greece, New Zealand, Philippines, Scotland, Singapore, South Africa, Nigeria and the United States”. I know that list doesn’t come close to covering a large share of Steam users, but I suspect that we Europeans are covered in other ways, so there’s that.

    The Wikipedia page also mentions the lawyer thing, by the way:

    A usual guiding principle in these courts is that individuals ought to be able to conduct their own cases and represent themselves without a lawyer. Rules are relaxed but still apply to some degree. In some jurisdictions, corporations must still be represented by a lawyer in small-claims court.

    And I don’t think you need to sue Valve in the US. I think they’re required to have legal representation in the countries in which they operate, which should enable you to sue them “locally” in many cases. Again, not an expert, so I’m making quite a few assumptions here.



  • If, for example, I want to return a game in accordance with the rules and they won’t let me, I’m not gonna lawyer up and sue them from the other side of the Atlantic.

    While supposedly a lot cheaper than litigation, arbitration isn’t free either. Besides, arbitration makes it near-impossible to appeal a decision, and the outcome won’t set binding legal precedent. Furthermore, arbitration often comes with a class action waiver, and Valve removed that waiver from the SSA as well.

    I’m far from an expert in law, especially US law, but as I understand it, arbitration is still available (if both parties agree, I assume), it’s just not a requirement anymore [edit: nevermind, I didn’t understand it]. I’m sure they’re making this move because it somehow benefits them, but it still seems to me that consumers are getting more options [edit: they’re not] which is usually a good thing.



  • Wrote a bit of a rant about that post here: https://infosec.exchange/@madsen/113086673971699662


    I really think Rune Lykkeberg (editor-in-chief at Information) and Henrik Gottlieb (language and translation researcher at the University of Copenhagen) needlessly reduce the Danish language to nothing but lexicography here: https://www.dr.dk/nyheder/indland/drukner-det-danske-ord-i-engelske-udtryk

    Beyond minor nuances in meaning (“team” vs. “hold” in a work context), there are also stylistic choices and social and/or geographical affiliations in play when people pick one word over a roughly equivalent one.

    Sociolinguistics is its own entire field within linguistics, one I can’t imagine Henrik Gottlieb is unfamiliar with, so statements like these come across as rather thin and shallow:

    Suddenly, in true middle-manager fashion, a “hold” is called a “team”, something “godt” is “nice”, and a “sammenstød” is suddenly a “clash”.

    • There is a snob value that makes people introduce fancy words for things they already have, says Henrik Gottlieb.

    Kandis “udgiver et nummer” (releases a track) while Kendrick Lamar “dropper et track” (drops a track). I don’t imagine speakers choose “nice” because they don’t know the word “godt” (good) or other near-synonyms. What he calls “snob value” is best viewed as either stylistic choice or plain sociolinguistics, and perhaps both. Calling it snobbery shows a lack of understanding of, or respect for, the aspects of language and language use that lie outside dictionary definitions.

    People adjust their language use (usually unconsciously) to signal which group of speakers they belong to. It’s no different from a writer at Information going to town with the dictionary of foreign words to signal that you’re reading Information, not BT or Billedbladet.

    /rant






  • so it’s probably just some points assigned for the answers and maybe some simple arithmetic.
    

    Why yes, that’s all that machine learning is, a bunch of statistics :)

    I know, but that’s not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:

    // Pseudo code (roughly JavaScript), Q1..Q35 being the yes/no answers
    function evaluateRisk(Q1, Q2, /* ..., */ Q22, Q23, Q28) {
        let risk = 0
        if (Q1 == true) {
            risk += 20
        }
        if (Q2 == true) {
            risk += 10
        }
        // etc...
        // Maybe throw in a bit of
        if (Q28 == true) {
            if (Q22 == true && Q23 == true) {
                risk *= 1.5
            } else {
                risk += 10
            }
        }

        // And finally, evaluate the risk:
        if (risk < 10) {
            return "negligible"
        } else if (risk >= 10 && risk < 40) {
            return "low risk"
        }
        // etc... You get the picture.
    }
    

    And yes, I know I can just write if (Q1) {, but I wanted to make it a bit more accessible for non-programmers.

    The article gives absolutely no reason for us to assume it’s anything more than that, and I apparently missed the part of the article that mentioned the system had been in use since 2007. I know machine learning existed back then too, but looking at the project description here: https://eucpn.org/sites/default/files/document/files/Buena practica VIOGEN_0.pdf it looks more like they reviewed a set of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I described above.

    Edit: I managed to find this, which has apparently been taken down since (but thanks to archive.org it’s still available): https://web.archive.org/web/20240227072357/https://eticasfoundation.org/gender/the-external-audit-of-the-viogen-system/

    VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.

    … which incidentally matches what the article says (that police maintain the VioGen risk score in 95% of the cases).
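
    To make that audit description concrete, here’s a rough sketch (TypeScript) of what “a weighted sum of all the responses according to pre-set weights” can look like in code. The weights and thresholds below are invented for illustration; VioGén’s actual values haven’t been published.

    // Hypothetical weighted-sum risk classifier.
    // Weights and thresholds are made up; they are NOT VioGén's.
    const WEIGHTS = [20, 10, 5, 15]; // one pre-set weight per yes/no question
    const THRESHOLDS: Array<[number, string]> = [
        [10, "negligible"],
        [40, "low"],
        [80, "medium"],
        [120, "high"],
    ];

    function riskLabel(answers: boolean[]): string {
        // Weighted sum: each "yes" answer contributes its pre-set weight.
        const score = answers.reduce(
            (sum, yes, i) => sum + (yes ? WEIGHTS[i] ?? 0 : 0),
            0
        );
        // Collapse the single score into a label.
        for (const [limit, label] of THRESHOLDS) {
            if (score < limit) return label;
        }
        return "extreme";
    }

    // Example: answers 1 and 3 are "yes" -> score 25 -> "low"
    console.log(riskLabel([true, false, true, false]));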


  • The crucial point is: 8% of the decisions turn out to be wrong or misjudged.

    The article says:

    Yet roughly 8 percent of women who the algorithm found to be at negligible risk and 14 percent at low risk have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.

    Granted, neither “negligible” nor “low risk” means “no risk”, but I think 8% and 14% are far too high for those categories.

    Furthermore, there’s this crucial bit:

    At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse.

    So of the 98 murders they reviewed, the algorithm had scored more than half (55 of 98, roughly 56%) as negligible or low risk for repeat abuse. That’s a fucking coin flip!



  • The article mentions that one woman (Stefany González Escarraman) went for a restraining order the day after the system deemed her risk “negligible”, and the judge denied it, citing the VioGén score.

    One was Stefany González Escarraman, a 26-year-old living near Seville. In 2016, she went to the police after her husband punched her in the face and choked her. He threw objects at her, including a kitchen ladle that hit their 3-year-old child. After police interviewed Ms. Escarraman for about five hours, VioGén determined she had a negligible risk of being abused again.

    The next day, Ms. Escarraman, who had a swollen black eye, went to court for a restraining order against her husband. Judges can serve as a check on the VioGén system, with the ability to intervene in cases and provide protective measures. In Ms. Escarraman’s case, the judge denied a restraining order, citing VioGén’s risk score and her husband’s lack of criminal history.

    About a month later, Ms. Escarraman was stabbed by her husband multiple times in the heart in front of their children.

    It also says:

    Spanish police are trained to overrule VioGén’s recommendations depending on the evidence, but accept the risk scores about 95 percent of the time, officials said. Judges can also use the results when considering requests for restraining orders and other protective measures.

    You could argue that the problem isn’t so much the algorithm itself as the level of reliance upon it. The algorithm isn’t unproblematic, though. The fact that it just spits out a single label (“negligible”, “low”, “medium”, “high”, “extreme”) is, IMO, an indicator that someone is trying to collapse far too many factors into a single dimension. I have a really hard time believing that anyone knowledgeable in criminal psychology and/or domestic abuse would agree that 35 yes/no questions are anywhere near sufficient to evaluate the risk of repeated abuse. (I know nothing about domestic abuse or criminal psychology, so I could be completely wrong.)

    Apart from that, I also find this highly problematic:

    [The] victims interviewed by The Times rarely knew about the role the algorithm played in their cases. The government also has not released comprehensive data about the system’s effectiveness and has refused to make the algorithm available for outside audit.







  • I don’t even eat that kind of ramen, so the ban “only” affects me on principle, but this is honestly the wildest clown show.

    The report from DTU also shows that no actual measurements were made of the noodles’ content of the chemical compound capsaicin, which is found in chili.

    Instead, the experts who prepared the report for the agency worked out the content by reading descriptions of the products on a website where they were sold.

    The website asiatorvet.dk stated, among other things: “NB: this near-lethal version has more than 13,000 fucking Scoville”. Scoville is a unit from which the capsaicin content can be calculated.

    Based on that description, the experts calculated how high the capsaicin content of the noodles is.

    The experts also note in the report that the website shows pictures of three “boys/young men”.

    - Judging from their facial expressions and body language, two of the boys appear to have stomach aches or a burning sensation in the mouth after eating the noodles, the report states.

    Can we get to read that report? It sounds like it could be a good introduction to ignoring the scientific method, data and facts in general, and just writing whatever you happen to feel like that day. In any case, it doesn’t sound like a basis on which Fødevarestyrelsen should act, and certainly not with wording like “risk of acute poisoning” and the like.
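
    For what it’s worth, the “calculation” you can do from a Scoville figure like that is trivial. Here’s a rough sketch, assuming the commonly cited approximation that pure capsaicin rates around 16 million SHU, i.e. roughly 16 SHU per ppm of capsaicin; the numbers are mine, not the report’s:

    // Back-of-the-envelope: Scoville claim -> approximate capsaicin content.
    // Assumes the common approximation 1 ppm capsaicin ≈ 16 SHU.
    const SHU_PER_PPM = 16;

    function capsaicinPpmFromScoville(shu: number): number {
        return shu / SHU_PER_PPM;
    }

    // "more than 13,000 Scoville" -> ~812 ppm, i.e. about 0.08% capsaicin by weight
    console.log(capsaicinPpmFromScoville(13000)); // 812.5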

    What kind of “consumer” is it that complained, and who at Fødevarestyrelsen are they family or friends with?

    They’d better recall nisseøl (Christmas brew) too, given the risk of acute alcohol poisoning. Fucking clowns…

    Edit: I assume this is the image they’re referring to in the report: https://web.archive.org/web/20231003202700/https://asiatorvet.dk/shop/53-samyang/ A promo shot for a product that sells itself on being hot.

    Edit 2: The report is here: https://janax.dk/wp-content/uploads/2024/06/Nudler-med-chili-6.-juni-2024.pdf It’s not quite as dumb as first assumed, but it’s still a damn unserious affair.