TechDirt

Easily digestible tech news...

As Expected, Senate Overwhelmingly Passes Unconstitutional SESTA Bill, Putting Lives In Danger

7 hours 3 min ago

This was not unexpected, but earlier today the Senate easily passed SESTA/FOSTA (the same version the House passed a few weeks ago) by a 97 to 2 vote -- with only Senators Ron Wyden and Rand Paul voting against it. We've explained in great detail why the bill is bad: it won't stop sex trafficking, it will actually put sex workers' lives in more danger, and it stomps on free speech and the open internet at the same time (which some see as a feature rather than a bug). The Senate declined to put any fixes in place.

Senator Wyden, who had originally offered up an amendment that would have fixed at least one big problem with the bill (clarifying that doing any moderation doesn't subject you to liability for other types of content), pulled the amendment right before the vote, noting that there had been a significant, if dishonest, lobbying effort to kill it, meaning it had no chance. He did note that, because of the bill's many problems, he fully expects these issues to be revisited shortly.

As for the many problems of the bill... well, they are legion, starting with the fact that multiple parts of the bill appear to be unconstitutional. That's most obvious in the "ex post facto" clause that applies the new criminal laws to activities in the past, which is just blatantly unconstitutional. There are some other serious questions about other parts of the bill, including concerns about it violating the First Amendment as well. It seems likely that the law will be challenged in court soon enough.

In the meantime, though, the damage here is real. The clearest delineation of the outright harm this bill will cause can be seen in a Twitter thread from a lawyer who represents victims of sex trafficking, who tweeted last night just how much damage this will do. It's a long Twitter thread, but well worth reading. Among other things, she notes that sites like Backpage were actually really useful for finding victims of sex trafficking and in helping them get out of dangerous situations. She talks about how her own clients would disappear, and the only way she could get back in touch with them to help them was often through these platforms. And all that will be gone, meaning that more people will be in danger and it will be that much harder for advocates and law enforcement to help them. She similarly notes that many of the groups supporting SESTA "haven't gotten their hands dirty in the field" and don't really understand what's happening.

That's true on the internet side as well. Mike Godwin highlights the history before CDA 230 was law and the kinds of problems that come about when you make platforms liable for the speech of their users.

In Cubby, a federal judge suggested (in a closely reasoned opinion) that the proper First Amendment model was the bookstore – bookstores, under American law, are a constitutionally protected space for hosting other people’s expression. But that case was misinterpreted by a later decision (Stratton Oakmont, Inc. v. Prodigy Services Co., 1995), so lawyers and policy advocates pushed to include platform protections in the Telecommunications Act of 1996 that amounted to a statutory equivalent of the Cubby precedent. Those protections, in Section 230, allowed platform providers to engage in certain kinds of editorial intervention and selection without becoming transformed by their actions into “publishers” of users’ content (and thus legally liable for what users say).

In short, we at EFF wanted platform providers to be free to create humane digital spaces without necessarily acquiring legal liability for everything their users said and did, and with no legal compulsion to invade users’ privacy. We argued from the very beginning about the need for service providers to be just, to support human rights even when they didn’t have to and to provide space and platforms for open creativity. The rules we worked to put into place later gave full bloom to the World Wide Web, to new communities on platforms like Facebook and Twitter and to collaborative collective enterprises like Wikipedia and open-source software.

Meanwhile the Senators who passed the bill will completely forget about all of this by next week, other than to pat themselves on the back and include 3 seconds in their next campaign ad about how they "took on big tech to stop sex trafficking." And, of course, people in Hollywood are laughing at how they pulled a fast one on the internet, and are already strategizing their next attacks on both CDA 230 and DMCA 512 (expect it soon).

None of those celebrating realize how much damage they've actually caused. They think they've "won" when they really did astounding levels of damage to both victims of sex trafficking and free speech in the same effort.


ProPublica's Reporting Error Shows Why The Government Must Declassify Details Of Gina Haspel's Role In CIA Torture

Wed, 03/21/2018 - 23:27

Last week, we wrote a bit about Donald Trump's nominee to head the CIA, Gina Haspel. That post highlighted a bunch of reporting about Haspel's role in running a CIA black site in Thailand that was a key spot in the CIA's torture program. Soon after we published it, ProPublica retracted and corrected an earlier piece -- one on which much of the reporting about Haspel's connection to torture relied. Apparently, ProPublica was wrong about the date on which Haspel started at the site, meaning that she took over soon after the most famous torture victim, Abu Zubaydah, was no longer being tortured. Thus earlier claims that she oversaw his inhumane, brutal, and war-crimes-violating torture were incorrect. To some, this error has been used to toss out all of the concerns and complaints about Haspel, even though reporters now agree that she did oversee the torture of at least one other prisoner at a time when other CIA employees were seeking to transfer out of the site out of disgust for what the CIA was doing.

However, what this incident should do is make it clear that the Senate should not move forward with Haspel's nomination unless the details of her involvement are declassified. As Trevor Timm notes, ProPublica's error was not due to problematic reporting, but was the inevitable result of the CIA hiding important information from the public.

In its report, ProPublica was forced to use a combination of heavily censored CIA and court documents and anonymous sources to piece together what happened over a decade ago in the secret CIA prison Haspel ran. Many of the documents were made public only after years of Freedom of Information Act fights brought by public interest groups, while many other documents on Haspel’s CIA tenure remain classified.

These types of unintentional mistakes would be almost entirely avoidable if journalists did not have to read between the lines of ridiculous government redactions meant to cover up crimes.

The most obvious example of this is the Senate’s 500-page summary of the torture report it released in 2014. How many times is Haspel named in the torture report? We have no idea. The redactions on the report completely obscured the names of all participants in the torture program, including the CIA personnel involved, as well as their partners in crime from authoritarian dictatorships like Libya, Egypt, and Syria.

At the time of the report’s release, advocates proposed that CIA personnel should at least be identified by pseudonyms so that the public could understand how many people were involved and if a particular person was responsible for more than others. That proposal was rejected as well.

Because of that, mistakes like the one ProPublica made are inevitable -- because the CIA (and those involved in declassifying what little was released from the Senate's CIA torture report) made it inevitable. Conveniently, this allows the CIA to discredit journalists who are working to report on these important issues.

So this should give even more weight to the demands of various human rights groups to declassify the details of Haspel's involvement. There can be no legitimate national security interest in continuing to keep this information secret. The program was ended long ago. It's been confirmed that Haspel ran the site and was part of the process to destroy the tapes of what happened. But there are more details that must be revealed.

Indeed, the Daily Beast claims that it has separate confirmation that Haspel actually was "in a position of responsibility" during the Zubaydah interrogation, though she wasn't present at the site. So it's possible that even ProPublica's "correction" is at least somewhat misleading. Which, again, is all the more reason to reveal to the public what actual authority and responsibility she had over the torture program.

And, as a side note, it's worth remembering that former CIA officer John Kiriakou was sent to jail for revealing the existence of the torture program. And now the woman who appears to have had authority over at least some of it (as well as the cover-up) may get to lead the CIA? Shouldn't our Senators at least demand a full public understanding of her role in all of it first?


Russian Court Says Telegram Must Hand Over Encryption Keys To State Intelligence Service

Wed, 03/21/2018 - 21:25

Here's an idea for the FBI, gift-wrapped and signed "From Russia, With Love."

Telegram, the encrypted messaging app that’s prized by those seeking privacy, lost a bid before Russia’s Supreme Court to block security services from getting access to users’ data, giving President Vladimir Putin a victory in his effort to keep tabs on electronic communications.

Supreme Court Judge Alla Nazarova on Tuesday rejected Telegram’s appeal against the Federal Security Service, the successor to the KGB spy agency which last year asked the company to share its encryption keys. Telegram declined to comply and was hit with a fine of $14,000. Communications regulator Roskomnadzor said Telegram now has 15 days to provide the encryption keys.

Who needs backdoors when messaging services are willing to keep their customers' front-door keys on hand for you? Sure, Telegram doesn't want to turn these over to the FSB, but its decision to hold onto encryption keys means they're available to be had. Telegram is appealing this decision, so customers' keys are safe for now, but there's zero chance the FSB is going to back down.

The FSB has also provided a ridiculous argument for the FBI to use when demanding companies retain keys for easy law enforcement access. According to the FSB's interpretation of the Russian constitution, no privacy violations occur when the government obtains citizens' encryption keys.

The security agency, known as the FSB, argued in court that obtaining the encryption keys doesn’t violate users’ privacy because the keys by themselves aren’t considered information of restricted access.

Clever. The keys are not restricted info. Everything accessible with the keys is. This isn't completely unlike judicial assertions that passwords are not evidence, even if relinquishing them then gives the government access to plenty of evidence. In this case, the FSB is collecting the keys to everyone's houses and promising not to open them up and take a look around whenever it feels the urge. The best way to protect users' privacy is to not hold the keys. The second best way is to walk away: to pull out of a market when the local government claims the only way you can do business there is by placing users' communications directly in its hands.
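To make that design point concrete, here's a minimal sketch of the "don't hold the keys" architecture, in Python with the PyNaCl library (purely illustrative; this is not how Telegram is actually built). When keys are generated and kept only on users' devices, there is simply nothing for a court order to extract from the company:

```python
from nacl.public import PrivateKey, Box

# Each client generates its keypair locally. The private halves never
# leave the devices, so the service has no keys to surrender.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
outbox = Box(alice_secret, bob_secret.public_key)
ciphertext = outbox.encrypt(b"meet at noon")

# The server only stores and forwards `ciphertext`; it cannot read it.
inbox = Box(bob_secret, alice_secret.public_key)
assert inbox.decrypt(ciphertext) == b"meet at noon"
```

A provider built this way can still be ordered to ship backdoored clients, but there is no cabinet of master keys sitting in an office for a regulator to demand.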

If Telegram is forced to hand the keys over, it will be the last communications company in Russia to do so. All others have "registered" with the state communications agency, putting their users' communications directly in the Russian government's hands. If Telegram decides to pull out of the market, it will leave behind nearly 10 million users. Many of those will probably end up utilizing services the FSB has already tapped. Others may go overseas for uncompromised messaging services. But in the end, the FSB will get what it wants.

As for Telegram, it's facing a tough choice. With an initial coin offering in the works, it may not be willing to shed 10 million users and risk lowering its value. On the other hand, it may find standing up for 10 million users isn't something that matters to investors. Unfortunately, pushing back against the FSB on behalf of its users still may result in the loss of several million users once the Russian high court reaches its expected decision several months down the road. It still has the option of moving its operations out of the reach of the Russian government while still offering its services to Russian citizens. This may be the choice it has to make if it wants its millions of Russian users to avoid being stuck with compromised accounts.


Appeals Court Says It's Okay To Copyright An Entire Style Of Music

Wed, 03/21/2018 - 19:55

Oh boy. We had hoped that the 9th Circuit might bring some sanity back to the music copyright world by overturning the awful "Blurred Lines" ruling that has already created a massive chilling effect among musicians... but no such luck. In a ruling released earlier this morning, the 9th Circuit largely affirmed the lower court ruling that said that Pharrell and Robin Thicke infringed on Marvin Gaye's copyright by writing a song, "Blurred Lines," that was clearly inspired by Gaye's "Got To Give It Up."

No one has denied that the songs had similar "feels," but "feeling" is not copyrightable subject matter. The compositions of the two songs were clearly different, and the similarity in feel was, quite obviously, paying homage to the earlier work, rather than "copying" it. For what it's worth, there appears to be at least some hesitation on the part of the majority, recognizing that this ruling could create a huge mess in the music world, so it tries (and mostly fails) to insist that the ruling is on narrow grounds, specific to this case (and much of it based on procedural reasons, which is a kind way of suggesting that the lawyers for Pharrell and Thicke fucked up royally). As the court summarizes:

We have decided this case on narrow grounds. Our conclusions turn on the procedural posture of the case, which requires us to review the relevant issues under deferential standards of review.

Throughout the majority ruling, you see things like the following:

We are bound by the “‘limited nature of our appellate function’ in reviewing the district court’s denial of a motion for a new trial.” Lam, 869 F.3d at 1084 (quoting Kode, 596 F.3d at 612). So long as “there was some ‘reasonable basis’ for the jury’s verdict,” we will not reverse the district court’s denial of a motion for a new trial. Id. (quoting Molski, 481 F.3d at 729). “[W]here the basis of a Rule 59 ruling is that the verdict is not against the weight of the evidence, the district court’s denial of a Rule 59 motion is virtually unassailable.” Id. (quoting Kode, 596 F.3d at 612). When that is the case, we reverse “only when there is an absolute absence of evidence to support the jury’s verdict.” Id. (quoting Kode, 596 F.3d at 612). “It is not the courts’ place to substitute our evaluations for those of the jurors.” Union Oil Co. of Cal. v. Terrible Herbst, Inc., 331 F.3d 735, 743 (9th Cir. 2003). Of note, we are “reluctant to reverse jury verdicts in music cases” on appeal, “[g]iven the difficulty of proving access and substantial similarity.” Three Boys Music, 212 F.3d at 481.

The Thicke Parties face significant, if not unsurmountable, hurdles. First, we are generally reluctant to disturb the trier of fact’s findings, and have made clear that “[w]e will not second-guess the jury’s application of the intrinsic test.” Id. at 485. Second, our review is necessarily deferential where, as here, the district court, in denying the Rule 59 motion, concluded that the verdict was not against the clear weight of the evidence. Finell testified that nearly every bar of “Blurred Lines” contains an area of similarity to “Got To Give It Up.” Even setting aside the three elements that trouble the Thicke Parties (“Theme X,” the bass line, and the keyboard parts), Finell and Dr. Monson testified to multiple other areas of extrinsic similarity, including the songs’ signature phrases, hooks, bass melodies, word painting, the placement of the rap and “parlando” sections, and structural similarities on a sectional and phrasing level. Thus, we cannot say that there was an absolute absence of evidence supporting the jury’s verdict.

That's just one example of many where the court more or less says its hands are tied when it comes to reviewing the jury's decision.

The whole thing is a mess, though, and is going to create lots of problems. And, honestly, I don't think I can do a better job than the one dissenting judge, Judge Jacqueline Nguyen, who seems to fully understand the issues at play and what a disaster this ruling is.

The majority allows the Gayes to accomplish what no one has before: copyright a musical style. “Blurred Lines” and “Got to Give It Up” are not objectively similar. They differ in melody, harmony, and rhythm. Yet by refusing to compare the two works, the majority establishes a dangerous precedent that strikes a devastating blow to future musicians and composers everywhere.

Judge Nguyen isn't pulling any punches.

The majority, like the district court, presents this case as a battle of the experts in which the jury simply credited one expert’s factual assertions over another’s. To the contrary, there were no material factual disputes at trial. Finell testified about certain similarities between the deposit copy of the “Got to Give It Up” lead sheet and “Blurred Lines.” Pharrell Williams and Robin Thicke don’t contest the existence of these similarities. Rather, they argue that these similarities are insufficient to support a finding of substantial similarity as a matter of law. The majority fails to engage with this argument.

Finell identified a few superficial similarities at the “cell” level by focusing on individual musical elements, such as rhythm or pitch, entirely out of context. Most of these “short . . . pattern[s]” weren’t themselves protectable by copyright, and Finell ignored both the other elements with which they appeared and their overall placement in each of the songs. Her analysis is the equivalent of finding substantial similarity between two pointillist paintings because both have a few flecks of similarly colored paint. A comparison of the deposit copy of “Got to Give it Up” and “Blurred Lines” under the extrinsic test leads to only one conclusion. Williams and Thicke were entitled to judgment as a matter of law.

Also, I'm glad to see a judge recognize a point (even if it's in a dissent) that many in the legacy copyright industries deny (even though it's actually to their benefit):

The purpose of copyright law is to ensure a robust public domain of creative works.... While the Constitution authorizes Congress to grant authors monopoly privileges on the commercial exploitation of their output, see U.S. Const. art. I, § 8, cl. 8, this “special reward” is primarily designed to motivate authors’ creative activity and thereby “allow the public access to the products of their genius.”... Accordingly, copyrights are limited in both time and scope. See U.S. Const. art. I, § 8, cl. 8 (providing copyright protection only “for limited Times”); Sony Corp., 464 U.S. at 432 (“This protection has never accorded the copyright owner complete control over all possible uses of his work.”); see also Berlin v. E.C. Publ’ns, Inc., 329 F.2d 541, 544 (2d Cir. 1964) (“[C]ourts in passing upon particular claims of infringement must occasionally subordinate the copyright holder’s interest in a maximum financial return to the greater public interest in the development of art, science and industry.”).

Judge Nguyen also points out a key point that you would hope that any judge hearing a copyright case would actually understand: copyright only covers the actual author's expression (and only the new and unique parts of it -- and only in limited ways). But that's not what the ruling in this case says:

An important limitation on copyright protection is that it covers only an author’s expression—as opposed to the idea underlying that expression.... Copyright “encourages others to build freely upon the ideas and information conveyed by a work.” Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 349–50 (1991) (citing Harper & Row Publishers, Inc. v. Nation Enters., 471 U.S. 539, 556–57 (1985))....


Such accommodations are necessary because “in art, there are, and can be, few, if any, things, which in an abstract sense, are strictly new and original throughout.” Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 575 (1994) (quoting Emerson v. Davies, 8 F. Cas. 615, 619 (C.C.D. Mass. 1845) (Story, J.)). Every work of art “borrows, and must necessarily borrow, and use much which was well known and used before.” Id. (quoting Emerson, 8 F. Cas. at 619); see 1 Melville D. Nimmer & David Nimmer, Nimmer on Copyright § 2.05[B] (rev. ed. 2017) (“In the field of popular songs, many, if not most, compositions bear some similarity to prior songs.”). But for the freedom to borrow others’ ideas and express them in new ways, artists would simply cease producing new works—to society’s great detriment.

And while the dissent points out that two songs may share the same "groove," that's not nearly enough for it to be copyright infringement.

“Blurred Lines” clearly shares the same “groove” or musical genre as “Got to Give It Up,” which everyone agrees is an unprotectable idea. See, e.g., 2 William F. Patry, Patry on Copyright § 4:14 (2017) (“[T]here is no protection for a communal style . . . .”). But what the majority overlooks is that two works in the same genre must share at least some protectable expression in order to run afoul of copyright law.

And, incredibly, as Judge Nguyen points out, the majority fails to even attempt to say what copyrightable expression was duplicated in Blurred Lines:

The majority doesn’t explain what elements are protectable in “Got to Give It Up,” which is surprising given that our review of this issue is de novo. See Mattel, Inc. v. MGA Entm’t, Inc., 616 F.3d 904, 914 (9th Cir. 2010). But by affirming the jury’s verdict, the majority implicitly draws the line between protectable and unprotectable expression “so broadly that future authors, composers and artists will find a diminished store of ideas on which to build their works.” Oravec v. Sunny Isles Luxury Ventures, L.C., 527 F.3d 1218, 1225 (11th Cir. 2008) (quoting Meade v. United States, 27 Fed. Cl. 367, 372 (Fed. Cl. 1992)).

Worse still, the majority invokes the oft-criticized “inverse ratio” rule to suggest that the Gayes faced a fairly low bar in showing substantial similarity just because Williams and Thicke conceded access to “Got to Give It Up.”... The issue, however, isn’t whether Williams and Thicke copied “Got to Give It Up”—there’s plenty of evidence they were attempting to evoke Marvin Gaye’s style. Rather, the issue is whether they took too much. Copying in and of itself “is not conclusive of infringement. Some copying is permitted.” ... Copying will only have legal consequences if it “has been done to an unfair extent.” ... In determining liability for copyright infringement, the critical and ultimate inquiry is whether “the copying is substantial.” Id.

And of course, what has been "copied" to create the "groove" is not subject to copyright protection. Which you would think would be important in court. But the majority didn't bother.

The Gayes don’t contend that every aspect of “Blurred Lines” infringes “Got to Give It Up.” Rather, they identify only a few features that are present in both works. These features, however, aren’t individually protectable. And when considered in the works as a whole, these similarities aren’t even perceptible, let alone substantial.

Judge Nguyen then explains, in fairly great detail (including with sheet music examples), why the copyright-protectable elements of the composition were not copied here. This is the kind of analysis that should have happened at the lower court. But it did not.

Judge Nguyen then points out that ruling this way on narrow procedural grounds is also nonsense.

The majority insists that the verdict is supported by the evidence but tellingly refuses to explain what that evidence is. Instead, it defends its decision by arguing that a contrary result is impossible due to Williams and Thicke’s purported procedural missteps.... While the procedural mechanism for granting relief is beside the point given the majority’s holding, there’s no such obstacle here.

I agree that we normally are not at liberty to review the district court’s denial of summary judgment after a full trial on the merits.... This rule makes eminent sense. Once a trial has concluded, any issues relating to the merits of the parties’ dispute “should be determined by the trial record, not the pleadings nor the summary judgment record.” ... However, there is little difference between reviewing a summary judgment ruling and a jury verdict other than the source of the factual record... and here there are no material factual disputes. A completed trial does not prevent us from reviewing the denial of summary judgment “where the district court made an error of law that, if not made, would have required the district court to grant the motion.”

Nguyen really hits back at the majority for suggesting that this is just a dispute between competing experts over what was similar and what was not. As she rightly points out, that's a question that comes up only after you've shown that the elements being copied are actually copyrightable subject matter. The lower court totally failed to do that, meaning this is an issue of law, not one of disputed facts. The law says these elements aren't protected. That's important, but the court ignored it entirely.

No one disputes that the two works share certain melodic snippets and other compositional elements that Finell identified. The only dispute regarding these similarities is their legal import—are the elements protectable, and are the similarities substantial enough to support liability for infringement? ...

By characterizing these questions as a factual dispute among experts, the majority lays bare its misconception about the purpose of expert testimony in music infringement cases. As with any expert witness, a musicologist can’t opine on legal conclusions, including the ultimate question here—substantial similarity.... Her role is to identify similarities between the two works, describe their nature, and explain whether they are “quantitatively or qualitatively significant in relation to the composition as a whole,”.... The value of such testimony is to assist jurors who are unfamiliar with musical notation in comparing two pieces of sheet music for extrinsic similarity in the same way that they would compare two textual works.

In other words, the lower court, and the majority, both got so caught up in the Gayes' "expert" talking about the similarities of "the groove" that they forgot to even bother to check if a "groove" is copyrightable.

Finally, Nguyen points out that this disaster is going to haunt lots of people -- likely including the Gaye Estate, given how much of Gaye's own work was built on those who came before him:

The Gayes, no doubt, are pleased by this outcome. They shouldn’t be. They own copyrights in many musical works, each of which (including “Got to Give It Up”) now potentially infringes the copyright of any famous song that preceded it.

Be careful what you wish for. You just might get it. And then get sued on the same grounds. It seems quite likely that we'll now see a flood of similar lawsuits (some have started already, but this will open the floodgates).


How 'Regulating Facebook' Could Make Everyone's Concerns Worse, Not Better

Wed, 03/21/2018 - 18:38

In my last post, I described why it was wrong to focus on claims of Facebook "selling" your data as the "problem" that came out over the weekend concerning Cambridge Analytica and the data it had on 50 million Facebook users. As we described in detail in that post, that's not the problem at all. Instead, much of the problem has to do with Facebook's utter failure to be transparent in a way that matters -- specifically in a way that its users actually understand what's happening (or what may happen) to their data. Facebook would likely respond that it has tried to make that information clear (or, alternatively, it may say that it can't force users to understand what they don't take the time to understand). But I don't think that's a good answer. As we've learned, there's a lot more at stake here than I think even Facebook recognized, and providing much more real transparency (rather than superficial transparency) is what's necessary.

But that's not what most people are suggesting. For example, a bunch of people are calling for "Know Your Customer" type regulations similar to what's found in the financial space. Others seem to just be blindly demanding "oversight" without being able to clearly articulate what that even means. And some are bizarrely advocating "nationalizing Facebook", which would literally mean giving billions in taxpayer dollars to Mark Zuckerberg. But these "solutions" won't solve the actual issues. In that article about "KYC" rules, there's the following, for example:

“They should know who’s paying them,” said Vasant Dhar, a professor of information systems at New York University, “because the consequences are very serious.” In December, Dhar wrote an op-ed calling for social media regulation — specifically, something similar to the “know your customer” laws that apply to banks. “The US government and our regulators need to understand how digital platforms can be weaponized and misused against its citizens, and equally importantly, against democracy itself,” he wrote at the time.

Antonio García-Martinez, Facebook’s first targeted ads manager, thinks so too. “For certain classes of advertising, like politics, a random schmo with a credit card shouldn’t just be able to randomly run ads over the entire Facebook system,” he told me.

Except... that has literally nothing to do with what the Cambridge Analytica controversy is all about. And, anyway, as we've discussed before, the Russians bent over backwards to pretend to be Americans when buying ads, so it's not like KYC rules would really have helped for the ads. The whole Cambridge Analytica mess may have involved some ads (and lots of other stuff), but Facebook knew who "the customer" was in that instance. And it knew how an "academic" was slurping up some data for "academic research." Knowing your customer wouldn't have made the slightest difference at all here.

Even Tim Berners-Lee, who recently stirred the pot by suggesting regulations for social media, doesn't seem to have much concrete information that would have mattered here.

What’s more, the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.

We’ve looked to the platforms themselves for answers. Companies are aware of the problems and are making efforts to fix them — with each change they make affecting millions of people. The responsibility — and sometimes burden — of making these decisions falls on companies that have been built to maximise profit more than to maximise social good. A legal or regulatory framework that accounts for social objectives may help ease those tensions.

I don't think Tim is wrong per se in arguing that there are issues in how much power is concentrated between a small group of large companies -- but I'm not sure that "a legal or regulatory framework" actually fixes any of that. Indeed, it seems highly likely to do the reverse.

As Ben Thompson notes in his own post about this mess, most of the regulatory suggestions being proffered will lock in Facebook as an entrenched incumbent. That's because it will (a) create barriers that Facebook can deal with, but startups cannot and (b) focus on "cementing" Facebook's model (with safeguards) rather than letting the next wave of creative destruction take down Facebook.

It seems far more likely that Facebook will be directly regulated than Google; arguably this is already the case in Europe with the GDPR. What is worth noting, though, is that regulations like the GDPR entrench incumbents: protecting users from Facebook will, in all likelihood, lock in Facebook’s competitive position.

This episode is a perfect example: an unintended casualty of this weekend’s firestorm is the idea of data portability: I have argued that social networks like Facebook should make it trivial to export your network; it seems far more likely that most social networks will respond to this Cambridge Analytica scandal by locking down data even further. That may be good for privacy, but it’s not so good for competition. Everything is a trade-off.

Note that last bit? A good way to take away Facebook's dominance is to enable others to compete in the space. The best way to do that? Make it easy for people to switch from Facebook to upstart competitors. The best way to do that? Make it easier for Facebook users to export their data... and use it on another service. But as soon as you do that, you're actually right back into the risky zone. Why is Facebook in so much hot water right now? Because it made it too easy to export user data to third-party platforms! And any kind of GDPR-type solution is just going to lock down that data, rather than freeing it up to help seed competition.

Cory Doctorow, over at EFF, has what I think is the most reasonable idea of all: enable third parties to build tools that help Facebook's (and every other platform's!) users better manage and understand their privacy settings and what's being done with their data. That's an actual solution to the problem we laid out in the previous post: Facebook's failed transparency. Doctorow compares the situation to ad blockers. Ads became too intrusive, and users were able to make use of third-party services to stop the bad stuff. We should be able to do something similar with privacy and data controls. But, thanks to some pretty dumb laws and court rulings (including a key one that Facebook itself caused), that's really not possible:

This week, we made you a tutorial explaining the torturous process by which you can change your Facebook preferences to keep the company’s “partners” from seeing all your friends’ data. But what many folks would really like to do is give you a tool that does it for you: go through the tedious work of figuring out Facebook’s inscrutable privacy dashboard, and roll that expertise up in a self-executing recipe — a piece of computer code that autopiloted your browser to login to Facebook on your behalf and ticked all the right boxes for you, with no need for you to do the fiddly work.

But they can’t. Not without risking serious legal consequences, at least. A series of court decisions — often stemming from the online gaming world, sometimes about Facebook itself — has made fielding code that fights for the user into a legal risk that all too few programmers are willing to take.

That's a serious problem. Programmers can swiftly make tools that allow us to express our moral preferences, allowing us to push back against bad behavior long before any government official can be convinced to take an interest — and if your government never takes an interest, or if you are worried about the government's use of technology to interfere in your life, you can still push back, with the right code.
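For a sense of what Doctorow's "self-executing recipe" might look like in practice, here's a minimal sketch using Python and Selenium. The settings URL and checkbox markup below are hypothetical stand-ins (Facebook's real markup changes constantly, which is part of the problem); it's precisely this kind of user-side automation that the court decisions above make legally risky to ship:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real browser through the privacy dashboard so the user
# doesn't have to. (A real tool would also need to handle login.)
driver = webdriver.Chrome()
driver.get("https://www.facebook.com/settings?tab=applications")  # hypothetical URL

# Untick every app-platform sharing checkbox the page exposes.
for checkbox in driver.find_elements(By.CSS_SELECTOR, "input[type=checkbox]"):
    if checkbox.is_selected():
        checkbox.click()

driver.quit()
```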

So if we really, truly, want to deal with the problem, then we need to push for more control by the end users. Let users control and export their data, and let people build tools that allow them to do so, and to control and transparently understand what others do with their data.

If someone comes up with a "regulatory regime" that does that, it would be fantastic. But so far, nearly every suggestion I've seen has gone in the other direction. They will do things like force Facebook to "lock down" its data even more, making it harder for users to extract it, or for third parties to provide users the tools they need to control their own data. They'll impose useless, but onerous, Know Your Customer rules that Facebook will be able to throw money at to solve, but that every smaller platform will find incredibly costly.

I'm not optimistic about how all of this works out. Even if you absolutely hate Facebook and think the company is evil, doesn't care one whit about your privacy, and is run by the most evil person on the planet, you should be especially worried about the regulatory suggestions that are coming. They're not going to help. They're going to entrench Facebook and lock down your data.


Daily Deal: The Project Management Professional Certification Training Bundle

Wed, 03/21/2018 - 18:33

The Project Management Professional Certification Training Bundle features 10 courses designed to get you up and running as a project manager. You'll prepare for certification exams by learning the fundamental knowledge, terminology, and processes of effective project management. Various methods of project management are covered as well, including Six Sigma, Risk Management, Prince and more. The bundle is on sale for $49.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Facebook Has Many Sins To Atone For, But 'Selling Data' To Cambridge Analytica Is Not One Of Them

Wed, 03/21/2018 - 17:25

Obviously, over the past few days there's been plenty of talk about the big mess concerning Cambridge Analytica using data on 50 million Facebook users. And, with that talk has come all sorts of hot takes and ideas and demands -- not all of which make sense. Indeed, it appears that there's such a rush to condemn bad behavior that many are not taking the time to figure out exactly what bad behavior is worth condemning. And that's a problem. Because if you don't understand the actual bad behavior, then your "solutions" will be misplaced. Indeed, they could make problems worse. And... because I know that some are going to read this post as a defense of Facebook, let me be clear (as the title of this post notes): Facebook has many problems, and has done a lot of bad things (some of which we'll discuss below). But if you mischaracterize those "bad" things, then your "solutions" will not actually solve them.

One theme that I've seen over and over again in discussions about what happened with Facebook and Cambridge Analytica is the idea that Facebook "sold" the data it had on users to Cambridge Analytica (alternatively that Cambridge Analytica "stole" that data). Neither is accurate, and I'm somewhat surprised to see people who are normally careful about these things -- such as Edward Snowden -- harping on the "selling data" concept. What Facebook actually does is sell access to individuals based on their data and, as part of that, open up the possibility for users to give some data to companies, but often unwittingly. There's a lot of nuance in that sentence, and many will argue that for all reasonable purposes "selling data" and my much longer version are the same thing. But they are not.

So before we dig into why they're so different, let's point out one thing that Facebook deserves to be yelled at over: it does not make this clear to users in any reasonable way. Now, perhaps that's because it's not easy to make this point, but, really, Facebook could at least do a better job of explaining how all of this works. Now, let's dig in a bit on why this is not selling data. And for that, we need to talk about three separate entities on Facebook. First are advertisers. Second are app developers. Third are users.

The users (all of us) supply a bunch of data to Facebook. Facebook, over the years, has done a piss poor job of explaining to users what data it actually keeps and what it does with that data. Despite some pretty horrendous practices on this front early on, the company has tried to improve greatly over the years. And, in some sense, it has succeeded -- in that users have a lot more granular control and ability to dig into what Facebook is doing with their data. But, it does take a fair bit of digging and it's not that easy to understand -- or to understand the consequences of blocking some aspects of it.

The advertisers don't (as is all too commonly believed) "buy" data from Facebook. Instead, they buy the ability to put ads into the feeds of users who match certain profiles. Again, some will argue this is the same thing. It is not. From merely buying ads, the advertiser gets no data in return about the users. It just knows what sort of profile info it asked for the ads to appear against, and it knows some very, very basic info about how many people saw or interacted with the ads. Now, if the ad includes some sort of call to action, the advertiser might then gain some information directly from the user, but that's still at the user's choice.
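The distinction is easiest to see in the shape of the data flow. As a rough sketch (the field names below are invented for illustration, not Facebook's actual ad API schema), the advertiser hands Facebook a targeting spec and gets back only aggregates:

```python
# What the advertiser sends: a creative plus a targeting spec.
# Facebook matches the spec against its own data; no user records
# ever flow back to the advertiser. (Invented field names.)
campaign = {
    "creative": "spring_sale.png",
    "targeting": {
        "age_range": (25, 34),
        "interests": ["photography", "travel"],
        "region": "US",
    },
}

# What the advertiser gets back: aggregate performance numbers only.
report = {"impressions": 120_000, "clicks": 840}
```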

The app developer ecosystem is a bit more complicated. Back in April of 2010, Facebook introduced the Open Graph API, which allowed app developers to hook into the data that users were giving to Facebook. Here's where "things look different in retrospect" comes into play. The original Graph API allowed developers to access a ton of information. In retrospect, many will argue that this created a privacy nightmare (which, it kinda did!), but at the same time, it also allowed lots of others to build interesting apps and services, leveraging that data that users themselves were sharing (though, not always realizing they were sharing it). It was actually a move towards openness in a manner that many considered benefited the open web by allowing other services to build on top of the Facebook social graph.

There is one aspect of the original API that does still seem problematic -- and really should have been obviously problematic right from the beginning. And this is another thing that it's entirely appropriate to slam Facebook for not comprehending at the time. As part of the API, developers could not only get access to all this information about you... but also about your friends. Like... everything. From the original Facebook page, you can see all the "friend permissions" that were available. These are better summarized in the following chart from a recent paper analyzing the "collateral damage of Facebook apps."

If you can't read that... it's basically a ton of info from friends, including their likes, birthdays, activities, religion, status updates, interests, etc. You can kind of understand how Facebook ended up thinking this was a good idea. If an app developer was designing an app to provide you a better Facebook experience, it might be nice for that app to have access to all that information so it could display it to you as if you were using Facebook. But (1) that's not how this ever worked (and, indeed, Facebook took legal action against services that tried to provide a better Facebook experience) and (2) none of this was made clear to end users -- especially the idea that in sharing your data with your friends, they might cough up literally all of it to some shady dude pushing a silly "personality test" game.
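To get a feel for how little friction was involved, here's a rough sketch of a v1.0-era Graph API call (simplified and partly reconstructed; the token is a placeholder). One consenting quiz-taker's access token was enough to page through data on all of their friends:

```python
import requests

# Placeholder token, granted when a single user installed the quiz app
# after approving "friend permissions" such as friends_birthday,
# friends_likes, friends_religion_politics, and so on.
TOKEN = "token-granted-to-the-quiz-app"

resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "access_token": TOKEN,
        "fields": "name,birthday,likes,religion,interests",
    },
)
for friend in resp.json().get("data", []):
    print(friend.get("name"), friend.get("birthday"))
```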

But, of course, as I noted in my original post, in some cases this setup was celebrated. When the Obama campaign used the app API this way to reach more and more people and collect all the same basic data, it was celebrated as a clever "voter outreach" strategy. Of course, the transparency levels were different there. Users of the Obama app knew what they were supporting -- though they perhaps didn't realize they were revealing a lot of friend data at the same time. Users of Cambridge Analytica's app... just thought they were taking a personality quiz.

And that brings us to the final point here: Cambridge Analytica, like many others, used this setup to suck up a ton of data, much of it from friends of people who agreed to install a personality test app (and, a bunch of those users were actually paid via Mechanical Turk to basically cough up all their friends' data). There are reasonable questions about why Facebook set up its API this way (though, as noted above, there were defensible, if short-sighted, reasons). There are reasonable questions about why Facebook wasn't more careful about watching what apps were doing with the data they had access to. And, most importantly, there are reasonable questions about how transparent Facebook was to its end users through all of this (hint: it was not at all transparent).

So there are plenty of things that Facebook clearly did wrong, but it wasn't about selling data to Cambridge Analytica and it wasn't Cambridge Analytica "stealing" data. The real problem was in how all of this was hidden. It comes back to transparency. Facebook could argue that this information was all "public" -- which, uh, okay, it was, but it was not public in a way that the average Facebook user (or even most "expert" Facebook users) truly understood. So if we're going to bash Facebook here, it should be for the fact that none of this was clear to users.

Indeed, even though Facebook shut down this API in April of 2015 (after deprecating it in April of 2014), most users still had no idea just how much information Facebook apps had about them and their friends. Today, the new API still coughs up a lot more info than people realize about themselves (and, again, that's bad and Facebook should improve that), but no longer your friends' data as well.

So slam Facebook all you want for failing to make this clear. Slam Facebook for not warning users about the data they were sharing -- or that their friends could share. Slam Facebook for not recognizing how apps were sucking up this data and the privacy implications related to that. But don't slam Facebook for "selling your data" to advertisers, because that's not what happened.

I was going to use this post to also discuss why this misconception is leading to bad policy prescriptions, but this one is long enough already, so stay tuned for that one next. Update: And here's that post.


If You're Pissed About Facebook's Privacy Abuses, You Should Be Four Times As Angry At The Broadband Industry

Wed, 03/21/2018 - 14:20

To be very clear, Facebook is well deserving of the mammoth backlash the company is experiencing in the wake of the Cambridge Analytica revelations. Especially since Facebook's most substantive reaction to date has been to threaten lawsuits against news outlets for telling the truth. And, like most of these stories, it's all but guaranteed to get worse as more and more is revealed about how such casual handling of private consumer data is pretty much routine, not only at Facebook but everywhere.

Despite the fact that consumer privacy apathy is now bone-grafted to the DNA of global corporate culture (usually only bubbling up after a scandal breaks), the outrage over Facebook's lack of transparency has been monumental.

Verizon-owned Techcrunch, for example, this week went so far as to call Facebook a "cancer," demanding that readers worried about privacy abuses delete their Facebook accounts. The #Deletefacebook hashtag has been trending, and countless news outlets have subsequently provided wall to wall coverage on how to delete your Facebook account (or at least delete older Facebook posts and shore up app-sharing permissions) in order to protect your privacy.

And while this outrage is well-intentioned and certainly justified, a lot of it seems a touch naive. Many of the folks that are busy deleting their Facebook accounts are simultaneously still perfectly happy to use their stock smartphone on a major carrier network, seemingly oblivious to the ugly reality that the telecom sector has been engaged, routinely, in far worse privacy violations for the better part of the last two decades. Behavior that has just as routinely failed to see anywhere near the same level of outrage by consumers, analysts or the tech press.

You'll recall that a decade ago, ISPs were caught routinely hoovering up clickstream data (data on each and every website you visit), then selling it to whoever was willing to pony up the cash. When ISPs were asked to share more detail on this data collection by the few outlets that thought this might not be a good idea, ISP executives would routinely play dumb and mute (they still do). And collectively, the lion's share of the press and public generally seemed OK with that.

From there, we learned that AT&T and Verizon were effectively bone-grafted to the nation's intelligence apparatus, and both companies were caught not only helping Uncle Sam spy on Americans without warrants, but also providing advice on how best to tap dance around wiretap and privacy laws. When they were caught spying on Americans in violation of the law, these companies' lobbyists simply convinced the government to change the law to make this behavior retroactively legal. Again, I can remember a lot of tech news outlets justifying this apathy for national security reasons.

Once these giant telecom operators were fused to the government's data gathering operations, holding trusted surveillance partners accountable for privacy abuses (or much of anything else) increasingly became an afterthought. Even as technologies like deep packet inspection made it possible to track and sell consumer online behavior down to the millisecond. As the government routinely signaled that privacy abuses wouldn't be seriously policed, large ISPs quickly became more emboldened when it came to even more "creative" privacy abuses.

Like the time Verizon Wireless was caught covertly modifying user data packets to track users around the internet without telling them or letting them opt out. It took two years for security researchers to even realize what Verizon was doing, and another six months for Verizon to integrate an opt-out function. But despite a wrist slap by the FCC, the company continues to use a more powerful variant of the same technology across its "Oath" ad empire (the combination of AOL and Yahoo) without so much as a second glance from most news outlets.
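The mechanism itself was trivial header injection. A sketch of the effect (the X-UIDH header name is real; the value and surrounding code are illustrative):

```python
# Headers the user's browser actually sent:
request_headers = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0",
}

# What Verizon's network gear silently appended in transit to
# unencrypted HTTP traffic: a persistent per-subscriber ID that
# every site and ad network along the way could read.
request_headers["X-UIDH"] = "OTgxNTk2..."  # invented value
```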

Or the time that AT&T, with full regulatory approval, decided it would be cool to charge its broadband customers hundreds of additional dollars per year just to protect their own privacy, something the company had the stones to insist was somehow a "discount." Comcast has since explored doing the same thing in regulatory filings (pdf), indicating that giant telecom monopolies are really keen on making consumer privacy a luxury option. Other companies, like CableOne, have crowed about using credit data to justify providing low income customers even worse support than the awful customer service the industry is known for.

And again, this was considered perfectly ok by government regulators, and (with a few exceptions) most of these efforts barely made a dent in national tech coverage. Certainly nowhere near the backlash we've seen from this Facebook story.

A few years back, the Wheeler-run FCC realized that giant broadband providers were most assuredly running off the rails in terms of consumer privacy, so it proposed some pretty basic privacy guidelines for ISPs. While ISPs whined incessantly about the "draconian" nature of the rules, the reality is they were relatively modest: requiring that ISPs simply be transparent about what consumer data was being collected or sold, and provide consumers with working opt-out tools.

But the GOP and Trump administration quickly moved (at Comcast, Verizon and AT&T's lobbying behest) to gut those rules via the Congressional Review Act before they could take effect. And when states like California tried to pass some equally modest privacy guidelines for ISPs on the state level to fill the void, telecom duopolies worked hand in hand with Google and Facebook to kill the effort, falsely informing lawmakers that privacy safeguards would harm children, inundate the internet with popups (what?), and somehow aid extremism on the internet. You probably didn't see much tech press coverage of this, either.

So again, it makes perfect sense to be angry with Facebook. But if you're deleting Facebook to protect your privacy but still happily using your stock, bloatware-laden smartphone on one of these networks, you're just trying to towel off in a rainstorm. The reality is that apathy to consumer privacy issues is the norm across industries, not the exception, and however bad Facebook's behavior has been on the privacy front, the telecom industry has been decidedly worse for much, much longer. And whereas you can choose not to use Facebook, a lack of competition means you're stuck with your snoop-happy ISP.

We've collectively decided, repeatedly, that it's OK to sacrifice consumer privacy and control for fatter revenues, a concept perfected by the telecom sector, and the Congressional and regulatory lackeys paid to love and protect them from accountability and competition. So while it's wonderful that we're suddenly interested in having a widespread, intelligent conversation about privacy in the wake of the Facebook revelations, let's do so with the broader awareness that Facebook's casual treatment of consumer privacy is just the outer maw of a mammoth gullet of dysfunction.


YouTuber Who Trained His Girlfriend's Dog To Be A Nazi Facing Hate Crime Charges In Scotland

Wed, 03/21/2018 - 11:23

Across the sea in the UK, offensive speech is still getting people jailed. An obnoxious person who trained his girlfriend's dog to perform the Nazi salute and respond excitedly to the phrase "gas the Jews" is looking at possible jail time after posting these exploits to YouTube under the name Count Dankula. According to Scotland resident Markus Meechan, it was the "least cute" thing he could train his girlfriend's dog to do, apparently in response to her constant gushing about the dog's cuteness.

Meechan's video racked up 3 million views on YouTube, but it really didn't start making news until local police started paying attention.

That April, soon after the video was posted, police knocked on Meechan’s door in Coatbridge, a town in North Lanarkshire, Scotland, he told Alex Jones. The officers told him that he was being charged with a hate crime and that the video could be seen as promoting violence against Jews. They told him to change his clothes, took pictures of his apartment and hauled him off to jail.

There is no doubt the video is offensive. But offended people have plenty of options to counter Meechan's speech with their own. Unfortunately, the 2003 law being used against him has ensured this counterspeech is solely taking the form of testimony against Meechan.

During the trial, Ephraim Borowski, director of the Scottish Council of Jewish Communities, who lost family members during the Holocaust, said the video was “grossly offensive.” “It stuns me that anyone should think it is a joke,” he said, according to The Times.

"My immediate reaction is that there is a clear distinction to be made between an off-hand remark and the amount of effort that is required to train a dog like that. I actually feel sorry for the dog.”

Meechan says he has no hate for Jews and did it solely to annoy his girlfriend. It was recorded, which means it was meant to entertain YouTube users, some of whom likely viewed the video as generally supportive of gassing Jews (which may have helpfully aligned with their own views on the subject). But speech can be offensive without being a hate crime, and the general criminalization of offensive subject matter isn't doing much to curb actual racially-motivated criminal activity. All it's really doing is ensuring UK courts receive a steady stream of defendants who've done nothing more dangerous than publicly display their questionable opinions and terrible senses of humor.

The YouTuber is now facing a year in prison because an unfunny prank came to the attention of local police. Prosecutors are busy trying to prove intent, which should be an uphill battle. Meechan has already issued a public apology, as well as a follow-up video further distancing his distasteful prank from any support for anti-Semitism. Nevertheless, prosecutors are alleging the sole reason for the recording was to cause fear and stir up hatred. That really doesn't seem to be the case despite several bigots deciding the video's release meant they should inundate the local Jewish community council with hateful messages.

Laws enforced in this fashion don't instill a greater respect for rule of law or those who craft bad laws with good intentions. Fifteen years have passed since this law took effect and it certainly hasn't shown much return on investment. Instead of stomping out hate, it's being used to carve holes in speech protections, ensuring the merely offensive will be given the same punishments as those who actually incite hatred and violent acts.

Photographer Tutorial Company Reacts To Pirates By Screwing With Them Hilariously

Wed, 03/21/2018 - 03:33

When it comes to content producers reacting to the pirating of their works, we've seen just about every reaction possible: from costly lawsuits and copyright trolling, to attempts to engage with this untapped market, up to and including creatively messing with those who would commit copyright infringement. The last of those options doesn't do a great deal to generate sales revenue, but it can often be seen by the public as both a funny way to jerk around pirates and as a method for educating them on the needs of creators.

But Fstoppers, a site that produces high-end tutorials for photographers and sells them for hundreds of dollars each, may have taken the creativity to the next level to mess with those downloading illegitimate copies of their latest work. They decided to release a version of Photographing the World 3 on several torrent sites a few days before it went to retail, but the version they released was much different than the actual product. It was close enough to the real thing that many people were left wondering just what the hell was going on, but ridiculous enough that it's downright funny.

Where Fstoppers normally go to beautiful and exotic international locations, for their fake they decided to go to an Olive Garden in Charleston, South Carolina. Yet despite the clear change of location, they wanted people to believe the tutorial was legitimate.

“We wanted to ride this constant line of ‘Is this for real? Could this possibly be real? Is Elia [Locardi] joking right now? I don’t think he’s joking, he’s being totally serious’,” says Lee Morris, one of the co-owners of Fstoppers.

People really have to watch the tutorial to see what a fantastic job Fstoppers did in achieving that goal. For anyone unfamiliar with their work, the tutorial is initially hard to spot as a fake and even for veterans the level of ambiguity is really impressive.

Beyond the location choices, there are some dead giveaways hidden in subtle ways within the "tutorial." As an example, here is a scene from the tutorial in which Locardi is demonstrating how to apply a 'mask' over one of the photos from Olive Garden.

If that looks like he's drawn a dick and balls over the photo on his computer screen, that's because that is exactly what he's done. The whole thing is an Onion-esque love letter to pirates, screwing with them for downloading the tutorial before the retail version was even available. By uploading this 25GB file to torrent sites, and going so far as to generate positive but fake reviews of the torrent, Fstoppers managed not only to generate hundreds of downloads of the fake tutorial, but its fake actually outpaced torrents of the real product. The whole thing was like a strange, funny honeypot. The fake apparently even resulted in complaints from pirates to Fstoppers about the quality of the fake.

Also of interest is the feedback Fstoppers got following their special release. Emails flooded in from pirates, some of whom were confused while others were upset at the ‘quality’ of the tutorial.

“The whole time we were thinking: ‘This isn’t even on the market yet! You guys are totally stealing this and emailing us and complaining about it,’” says Fstoppers co-owner Patrick Hall.

You have to admit, the whole thing is both creative and funny. Still, the obvious question is whether all the time and effort that went into putting this together couldn't have been better spent figuring out a business model that flipped more of these pirates into paying customers, rather than simply screwing with them.

Tempe Police Chief Indicates The Uber Self-Driving Car Probably Isn't At Fault In Pedestrian Death

Tue, 03/20/2018 - 23:47

The internet ink has barely dried on Karl's post about an Uber self-driving vehicle striking and killing a pedestrian in Arizona, and we already have an indication from the authorities that the vehicle probably isn't to blame for the fatality. Because public relations waits for nobody, Uber suspended its autonomous vehicles in the wake of the death of a woman in Tempe, but that didn't keep fairly breathless headlines from being painted all across the mainstream media. The stories that accompanied those headlines were more careful to mention that an investigation is required before anyone knows what actually happened, but the buzz created by the headlines wasn't so nuanced. I actually saw this in my own office, where several people could be heard mentioning that autonomous vehicles were now done.

But that was always silly. It's an awkward thing to say, but the fact that it took this long for AVs to strike and kill a pedestrian is a triumph of technology, given just how many people we humans kill with our cars. Hell, the Phoenix area itself had 11 pedestrian deaths by car in the last week, with only one of them being this Uber car incident. And now all of that hand-wringing is set to really look silly, as the Tempe police chief is indicating that no driver, human or AI, would likely have been able to prevent this death.

The chief of the Tempe Police has told the San Francisco Chronicle that Uber is likely not responsible for the Sunday evening crash that killed 49-year-old pedestrian Elaine Herzberg.

“I suspect preliminarily it appears that the Uber would likely not be at fault in this accident," said Chief Sylvia Moir.

Herzberg was "pushing a bicycle laden with plastic shopping bags," according to the Chronicle's Carolyn Said, when she "abruptly walked from a center median into a lane of traffic."

After viewing video captured by the Uber vehicle, Moir concluded that “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway."

So, once again, this tragedy has almost nothing to do with automobile AI and everything to do with human beings being faulty, complicated creatures that make mistakes. We don't need to assign blame or fault to a woman who died to admit to ourselves that not only did the self-driving car do nothing wrong in this instance, but also that it might just be true to say that the car's AI had a far better chance of avoiding a fatality than the average human driver. The car was not speeding. It did not swerve. It did not adjust its speed prior to the collision.

This obviously isn't the conclusion of the police's investigation, but when the police chief is already making these sorts of noises early on, it's reasonable to conclude that the visual evidence of what happened is pretty clear. Sadly, all this likely means is that the major media websites of the world will have to bench their misleading headlines until the next death that may or may not be the fault of a self-driving vehicle.

Techdirt Podcast Episode 159: What Does It Mean For Social Media To Be Held Accountable?

Tue, 03/20/2018 - 21:30

This isn't the first time we've discussed this on the podcast, and it probably won't be the last — disinformation online is a big and complicated topic, and there are a whole lot of angles to approach it from. This week, we're joined by Renee DiResta, who has been researching disinformation ever since the anti-vaxxer movement caught her attention, to discuss what exactly it means to say social media platforms should be held accountable.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

EU's Mandatory Copyright Content Filter Is The Zombie That Just Never Dies

Tue, 03/20/2018 - 19:58

For the past few years, there's been a dedicated effort by some to get mandatory filters into EU copyright rules, despite the fact that this would destroy smaller websites, wouldn't work very well, and would create all sorts of other consequences the EU doesn't want, including suppression of free speech. Each time it pops up again, a few people who actually understand these things have to waste a ridiculous amount of time lobbying folks in Brussels to explain to them how disastrous the plan will be, and they back down. And then, magically, it comes back again.

That appeared to happen again last week. EU Parliament Member Julia Reda called attention to this by pointing out that, despite a promise that mandatory filters would be dropped, they had suddenly come back:

A huge scandal is brewing in the European Parliament on the #CensorshipMachines. Last week, rapporteur @AxelVossMdEP had shown signs to drop the harmful filtering obligation. Now it’s back, and it’s worse than ever. Read it here: #FixCopyright

— Julia Reda (@Senficon) March 14, 2018

The draft of the proposal included a requirement that any site that doesn't have licensing agreements with rightsholders for any content on their site must take "appropriate and proportionate technical measures leading to the non-availability of copyright or related-right infringing works...." In other words, a mandatory filter.

Incredibly, as Reda discovered, despite the fact that this issue is now in the hands of the EU Parliament, rather than the EU Commission, the metadata on the draft rules showed it was created by the EU Commission. After meeting with the MEP who is in charge of this, Reda posted that that individual, Axel Voss, claimed it was a "mistake" to include the requirement for "technical measures" (uh, yeah, sure), but still plans to make platforms liable for any infringement on their platforms.

One of the many problems with this is that the people who demand these things tend to have little to no understanding of how the internet actually works. They get upset about finding some small amount of infringing content on a large internet platform (YouTube, Facebook, etc.) and demand mandatory filtering. Of course, both YouTube and Facebook already have expensive filters. But this impacts every other site as well -- sites that cannot afford such filtering.

Indeed, Github quickly published a blog post detailing how much harm this would do to its platform, which in turn would create a massive headache for open source software around the globe.

Upload filters (“censorship machines”) are one of the most controversial elements of the copyright proposal, raising a number of concerns, including:

  • Privacy: Upload filters are a form of surveillance, effectively a “general monitoring obligation” prohibited by EU law
  • Free speech: Requiring platforms to monitor content contradicts intermediary liability protections in EU law and creates incentives to remove content
  • Ineffectiveness: Content detection tools are flawed (generate false positives, don’t fit all kinds of content) and overly burdensome, especially for small and medium-sized businesses that might not be able to afford them or the resulting litigation

Upload filters are especially concerning for software developers given that:

  • Software developers create copyrightable works—their code—and those who choose an open source license want to allow that code to be shared
  • False positives (and negatives) are especially likely for software code because code often has many contributors and layers, often with different licensing for different components
  • Requiring code-hosting platforms to scan and automatically remove content could drastically impact software developers when their dependencies are removed due to false positives
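
To see why such filters misfire, it helps to make the mechanics concrete. Below is a deliberately naive sketch -- our own illustration, not anything from the proposal or from Github's post -- of a fingerprint-style upload filter: hash fixed-size chunks of each upload and refuse anything that matches a rightsholder-supplied blocklist.

    import hashlib

    # Hypothetical fingerprint list supplied by rightsholders
    # (SHA-256 hashes of 1 KB chunks of "protected" works).
    BLOCKLIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprints(data: bytes, chunk_size: int = 1024) -> set:
        # Hash every fixed-size chunk of the upload.
        return {
            hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)
        }

    def is_blocked(upload: bytes) -> bool:
        # A single matching chunk refuses the whole upload -- context,
        # licensing and fair use never enter the decision.
        return bool(fingerprints(upload) & BLOCKLIST)

Real filters are fancier than this, but the structural problem is the same: the filter sees bytes, not rights, so a lawfully licensed open source dependency that shares a chunk with a blocklisted file gets rejected just as readily as actual infringement.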

A special site has been set up for people to let the EU Parliament know just how much damage this proposal would do to free and open source software.

Of course, the requirements would hit lots of other platforms as well. Given enough uproar, I imagine that they'll rewrite a few definitions just a bit to exempt Github. It appears that's what they did to deal with similar concerns about Wikipedia. But, that's no way to legislate. You don't just build in a list of exemptions as people point out to you how dumb your law is. You rethink the law.

Unfortunately, when it comes to this zombie censorship machine, it appears it's an issue that just won't die.

DOJ Readying Warrants In Carter Page Investigation For Public Release

Tue, 03/20/2018 - 18:45

For the first time since the FISA court opened for national security business, the DOJ is considering declassifying FISA warrant applications. The documents are linked to the FBI's surveillance of former Trump campaign aide, Carter Page. Both sides of the political aisle have asked for these documents, which is something you'd think they'd have wanted to see before issuing their takes on perceived surveillance improprieties.

Devin Nunes -- following the release of his memo -- sent a letter to the FISA court asking it to clear the warrants for public release. The court's reply, penned by Judge Rosemary Collyer, pointed out two things. First, the FISA court had never released these documents publicly, nor was it in the best position to do so. It is only tasked with determining whether or not surveillance is warranted and to what restrictions it must adhere. It does not have the innate power to declassify documents, nor can it arbitrarily decide what documents have gathered enough public interest to outweigh the government's perpetual demands for secrecy.

The court did point out this release could be achieved much faster if Nunes directed his question to the DOJ, which does have the power to declassify its own investigation documents. It doesn't appear Devin Nunes has approached the DOJ, but litigants in a FOIA lawsuit have, and they're looking at possibly obtaining the documents Nunes requested from the FISA court.

The government is considering an unprecedented disclosure of parts of a controversial secret surveillance order that justified the monitoring of former Trump campaign aide Carter Page.

Responding to a legal challenge brought by USA TODAY and the James Madison Project, Justice Department lawyers Friday cast the ongoing review as “novel, complex and time-consuming.”

“The government has never, in any litigation civil or criminal, processed FISA (Foreign Intelligence Surveillance Act) applications for release to the public,” Justice lawyers wrote in a five-page filing.

The filing [PDF] notes that the President's unilateral decision (over the DOJ's objection) to release the Nunes memo has forced it to reverse some of its Glomar declarations in ongoing FOIA lawsuits, since it's impossible to maintain a stance of non-confirmation and non-denial when the White House is handing out confirmation left and right.

The DOJ has asked for four months to review the documents before returning to the court with its final answer on public release. There will probably be further delays from this point, as the DOJ will need the FISA court to officially unseal documents before it can turn these over to the litigants. It notes the plaintiffs are not happy with this timetable, but points to the presumed complexity of a task it has never undertaken previously as the reason it needs 120 days before the litigation can move forward.

This administration continues to break new ground in inadvertent transparency. However, the release of these documents may further undermine the Nunes Memo narrative, so legislators and White House officials hellbent on using the FISA court to score political points should be careful what they wish for.

Daily Deal: Pay What You Want 2018 Arduino Enthusiast E-Book Bundle

Tue, 03/20/2018 - 18:40

The 2018 Arduino Enthusiast E-Book Bundle contains 10 ebooks of project-based instruction to help you master all things Arduino. Pay what you want and get the Arduino Computer Vision Programming ebook, where you'll learn how to develop Arduino-supported computer vision systems that can interact with real life by seeing it. Beat the average price ($10 at the time of writing) to unlock access to 9 more ebooks. You'll learn to create your own wearable projects by mastering different electronic components, such as LEDs and sensors. Discover how to build projects that can move using DC motors, walk using servo motors, and avoid barriers using their sensors. From home automation to your own IoT projects and more, these books have you covered.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Did Facebook Violate SESTA By Promoting Child Abuse Videos?

Tue, 03/20/2018 - 17:18

Facebook -- and Sheryl Sandberg in particular -- have been the most vocal supporters of SESTA. Sandberg wrote a bizarre Facebook post supporting the horrible SESTA/FOSTA Frankenstein bill the day it was voted on in the House. In it, she wrote:

I care deeply about this issue and I’m so thankful to all the advocates who are fighting tirelessly to make sure we put a stop to trafficking while helping victims get the support they need. Facebook is committed to working with them and with legislators in the House and Senate as the process moves forward to make sure we pass meaningful and strong legislation to stop sex trafficking.

Which is weird, given that the bill does nothing to actually stop sex trafficking, but it does place potential criminal liability on any internet site that knowingly facilitates sex trafficking. Like... say... Facebook. You see, last week, there was a bit of a kerfuffle when Facebook suddenly started pushing horrific child abuse videos on its users.

The social network’s search suggestions, which are supposed to automatically offer the most popular search terms to users, apparently broke around 4am in the UK, and started to suggest unpleasant results for those who typed in “video of”.

Multiple users posted examples on Twitter, with the site proposing searches including “video of girl sucking dick under water”, “videos of sexuals” and “video of little girl giving oral”. Others reported similar results in other languages.

Facebook has since apologized for this and claimed that it is committed to taking down such content. But how hard would it be for someone to make a case that the company had just engaged in pushing child pornography on unsuspecting users? And there could be a credible claim that many of the videos involved victims of sex trafficking.

And, of course, this comes right after another possibly SESTA-violating fiasco at Facebook in which the company sent out a survey about whether the site should allow adult men to ask for sexual pictures of teenaged girls. No, really.

On Sunday, the social network ran a survey for some users asking how they thought the company should handle grooming behaviour. “There are a wide range of topics and behaviours that appear on Facebook,” one question began. “In thinking about an ideal world where you could set Facebook’s policies, how would you handle the following: a private message in which an adult man asks a 14-year-old girl for sexual pictures.”

The options available to respondents ranged from “this content should not be allowed on Facebook, and no one should be able to see it” to “this content should be allowed on Facebook, and I would not mind seeing it”.

A second question asked who should decide the rules around whether or not the adult man should be allowed to ask for such pictures on Facebook. Options available included “Facebook users decide the rules by voting and tell Facebook” and “Facebook decides the rules on its own”.

After this became public and people called it out, Facebook also claimed that this was "an error," but it seems like it wouldn't take a genius lawyer or prosecutor to argue that the company choosing to send out just such a survey shows it facilitating sex trafficking. I mean, it was directly asking if it should allow for the sort of activity directly involved in grooming victims for sex trafficking.

Oh, and remember that even while this is blatantly unconstitutional, SESTA says the law applies retroactively -- meaning that even though all of this happened prior to SESTA becoming law, Facebook is potentially still quite guilty of violating the poorly drafted criminal law it is loudly supporting.

The Cable Industry Is Quietly Securing A Massive Monopoly Over American Broadband

Tue, 03/20/2018 - 14:19

Cable providers like Comcast and Charter continue to quietly secure a growing monopoly over American broadband. A new report by Leichtman Research notes that the nation's biggest cable companies added a whopping 83% of all net broadband subscribers last quarter. All told, the nation's top cable companies (predominantly Charter Spectrum and Comcast) added 2.7 million broadband subscribers in 2017, while the nation's telcos (AT&T, Verizon, CenturyLink, Frontier) saw a net loss of 625,000 subscribers last year, slightly worse than the 600,000-subscriber net loss they witnessed in 2016.

A pretty obvious pattern begins to emerge from Leichtman's data, and it's one of total and absolute cable industry dominance:

"The top broadband providers in the US added nearly 4.8 million net broadband subscribers over the past two years," said Bruce Leichtman, president and principal analyst for Leichtman Research Group, Inc. "The top cable companies accounted for 130% of the net broadband additions in 2017, following 122% of the net adds in 2016."

Oddly, Leichtman can't be bothered to explain why the cable industry has become so dominant: a total refusal by the nation's phone companies to upgrade their networks at any real scale. Verizon years ago decided that residential broadband wasn't profitable enough quickly enough, so it froze its FiOS fiber deployments to instead focus on flinging video advertisements at Millennials.

You'll note from Leichtman's data that the only telcos still adding subscribers are those that are actually trying to upgrade to fiber to the home (AT&T, Cincinnati Bell). Even then, while AT&T is upgrading some areas to fiber, actual availability remains spotty as the company largely focuses on developments and college campuses where costs are minimal. There are still millions of customers in AT&T territories stuck on DSL straight out of 2003, and they won't be getting upgraded anytime soon.

Other American telcos, like Frontier, Windstream and CenturyLink, have effectively refused to upgrade aging DSL lines at any real scale, meaning they're incapable of even offering the FCC's base definition of broadband (25 Mbps down, 3 Mbps up) across huge swaths of America. Frontier in particular has been a bit of a shitshow, if you've followed the often comic corruption and cronyism they've fostered in states like West Virginia. Other telcos (like CenturyLink) now don't see residential broadband as worth their time, so they've shifted much of their focus to enterprise services or the acquisition of transit operators like Level 3.

The result is a massive gap between the broadband haves and the broadband have nots, especially in rural markets and second and third tier cities these companies no longer deem worthy of upgrading (they will, however, back awful protectionist state laws banning towns and cities from serving themselves, even when no incumbent ISP wants to).

This is all wonderful news for natural monopolies like Comcast, who now face less competitive pressure than ever. That means a reduced incentive to lower prices or shore up what's widely considered some of the worst customer service in any industry in America. It also opens the door wider to their dream of inundating American consumers with arbitrary and unnecessary usage caps, which not only drive up the cost of broadband service, but make shifting to streaming cable alternatives more costly and cumbersome.

While many people like to argue that wireless (especially fifth generation, or 5G) will come in and save us all with an additional layer of competition, that ignores the fact that wireless backhaul services remain dominated by just a few monopolies as well, ensuring competition there too remains tepid and often theatrical in nature. And with many cable providers now striking bundling partnerships with wireless carriers, the incentive to actually compete with one another remains notably muted, as nobody in the sector wants an actual price war.

This of course is all occurring while the Trump administration attempts to gut most meaningful oversight of the uncompetitive broadband sector, meaning neither competition nor adult regulatory supervision will be forcing improvement any time soon. With the death of net neutrality and broadband privacy protections opening the door to even more anti-competitive behavior than we've grown accustomed to, what could possibly go wrong?

Cops Wanting To Track Movements Of Hundreds Of People Are Turning To Google For Location Records

Tue, 03/20/2018 - 11:21

Police in Raleigh, North Carolina are using Google as a proxy surveillance dragnet. This likely isn't limited to Raleigh. Google harvests an astounding amount of data from users, but what seems to be of most interest to law enforcement is location info.

In at least four investigations last year – cases of murder, sexual battery and even possible arson at the massive downtown fire in March 2017 – Raleigh police used search warrants to demand Google accounts not of specific suspects, but from any mobile devices that veered too close to the scene of a crime, according to a WRAL News review of court records. These warrants often prevent the technology giant for months from disclosing information about the searches not just to potential suspects, but to any users swept up in the search.

The good news is there are warrants in play. This likely isn't due to the PD's interest in "balancing civil rights with public safety," no matter what the government's frontmouths have stated to the press. The WRAL report includes a warrant request containing a footnote indicating Google pushed back when the cops showed up with a subpoena demanding info on everyone who had been in the vicinity of a crime scene.

The State of North Carolina maintains that the information sought herein consists entirely of "record[s]" other than the "content of communications…" Such records, require only a showing that there are reasonable grounds to believe the information sought is relevant and material to an ongoing criminal investigation… Despite this, Google has indicated that it believes a search warrant, and not a court order, is required to obtain the location data sought in this application. Although the Government disagrees with Google's position, because there is probable cause to believe the information sought herein will contain evidence of the criminal offenses specified in this affidavit, the Government is seeking a search warrant in this case.

But the bad news is that warrants are supposed to be tailored to specific searches related to specific suspects. These warrants are allowing law enforcement to obtain information on dozens or hundreds of people entirely unrelated to the criminal act being investigated, other than their proximity to the crime scene.

At least 19 search warrants issued by law enforcement in Wake County since 2015 targeted specific Google accounts of known suspects, court documents show.

But the March search warrants in the two homicide cases are after something different.

The demands Raleigh police issued for Google data described a 17-acre area that included both homes and businesses. In the Efobi homicide case, the cordon included dozens of units in the Washington Terrace complex near St. Augustine's University.
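
To put the technique in concrete terms, here's a minimal sketch of the kind of area-plus-time query such a warrant describes. Everything in it -- the record layout, the field names, the distance cutoff -- is our own assumption for illustration, not anything disclosed about Google's actual systems:

    from dataclasses import dataclass
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class LocationRecord:
        user_id: str
        lat: float
        lon: float
        timestamp: int  # Unix seconds

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two coordinates, in meters.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    def devices_near_scene(records, scene_lat, scene_lon, radius_m, t_start, t_end):
        # Every distinct account with at least one location fix inside the
        # geofence during the window -- suspects and bystanders alike.
        return {
            r.user_id
            for r in records
            if t_start <= r.timestamp <= t_end
            and haversine_m(r.lat, r.lon, scene_lat, scene_lon) <= radius_m
        }

Note what such a query returns: every account that passed through the area, with the narrowing down to an actual suspect happening only afterward.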

Cellphones -- and any other devices using Google products that serve up location info -- are steady generators of third party records. Conceivably, this puts a wealth of location info only a subpoena's-length away from the government. It appears Google is pushing back, but that tactic isn't going to work in every case. The Raleigh PD may have been willing to oblige Google to avoid a fight in court (and the risk of handing over information about how often it approaches third parties for records and what it demands from them), but not every PD making use of Google's location info stash is going to back down this easily.

Other warrants obtained by WRAL show local law enforcement has collected phone info and location data on thousands of people while investigating such crimes as arson and sexual battery. Despite having no evidence showing the perpetrators of these crimes even had a cellphone in their possession at the time the incidents occurred, agencies approached Google with judge-approved warrants to collect data on people living in nearby apartment units or visiting local businesses near the crime scene.

The government's attorneys believe everything is perfectly fine because no communications or other content is collected. But defense attorneys and rights advocates point out these warrants approach civil liberties from entirely the wrong direction, making everyone in the area a suspect before trimming down the list. In one way, it's a lot like canvassing a neighborhood looking for a suspect. But this analogy only holds if everyone officers approach is viewed as a suspect. This isn't the case in normal, non-Google-based investigations.

After five years as a Wake County prosecutor, Raleigh defense attorney Steven Saad said he's familiar with police demands for Google account data or cell tower records on a named suspect. But these area-based search warrants were new to him.

"This is almost the opposite technique, where they get a search warrant in the hopes of finding somebody later to follow or investigate," Saad said. "It's really hard to say that complies with most of the search warrant or probable cause rules that we've got around the country."

The Wake County District Attorney believes these warrants are solid enough to survive a challenge in court. The government may get its wish. Officers using these to obtain data will likely come out alright, thanks to good faith reliance on approved warrants, but the magistrates approving dragnet collection as an initial investigatory step may have some explaining to do.

The DA says it's just like watching business surveillance video when investigating a robbery. Lots of non-criminals will be captured on tape and their movements observed by officers. But the comparison fails because, in most cases, the criminal act will be caught on tape, limiting further investigation to the perpetrators seen by officers. In these cases, everyone with a cellphone within a certain distance of the crime scene becomes a suspect whose movements are tracked post facto by officers who have no idea -- much less actual probable cause to believe -- that any of this data actually points to the suspect they're seeking.

Then there's the problem of effectiveness. Starting an investigation with a geofence and attempting to turn it into a bottleneck that will result in pursuable suspects doesn't seem to be working. According to the documents seen by WRAL, only one person has been arrested for any of the crimes in which police approached Google for data on thousands of users. And in that case, the location data law enforcement requested didn't show up until months after the suspect was charged. It may be that there have been more success stories, but routine sealing of documents and warrants tends to make it impossible for anyone outside of the police department to know for sure. But the department knows. And if it has good things to say about this questionable search technique, it hasn't offered to share them with the rest of the public.

Crowdfunded OpenSCHUFA Project Wants To Reverse-Engineer Germany's Main Credit-Scoring Algorithm

Tue, 03/20/2018 - 03:47

We've just written about calls for a key legal communications system to be open-sourced as a way of rebuilding confidence in a project that has been plagued by problems. In many ways, it's surprising that these moves aren't more common. Without transparency, there can be little trust that a system is working as claimed. In the past this was just about software, but today there's another aspect to the problem. As well as the code itself, there are the increasingly complex algorithms the software implements. There is a growing realization that algorithms are ruling important parts of our lives without any public knowledge of how they work or make decisions about us. In Germany, for example, one of the most important algorithms determines a person's SCHUFA credit rating: the name comes from an abbreviation of the German "Schutzgemeinschaft für allgemeine Kreditsicherung", which means "Protection Association for General Credit Security". As a site called Algorithm Watch explains:

SCHUFA holds data on round about 70 million people in Germany. That's nearly everyone in the country aged 18 or older. According to SCHUFA, nearly one in ten of these people living in Germany (some 7 million people) have negative entries in their record. That's a lot!

SCHUFA gets its data from some 9,000 partners, such as banks and telecommunication companies. Incredibly, SCHUFA doesn't believe it has a responsibility to check the accuracy of data it receives from its partners.

In addition, the algorithm used by SCHUFA to calculate credit scores is protected as a trade secret so no one knows how the algorithm works and whether there are errors or injustices built into the model or the software.

So basically, if you are an adult living in Germany, there's a good chance your financial life is affected by a credit score produced by a multimillion-euro private company using an automatic process that they do not have to explain and an algorithm based on data that nobody checks for inaccuracies.

A new crowd-sourced project called OpenSCHUFA aims to change that. It's being run by Algorithm Watch and Open Knowledge Foundation Germany (full disclosure: I am an unpaid member of the Open Knowledge International Advisory Council). As well as asking people for monetary support, OpenSCHUFA wants German citizens to request a copy of their credit record, which they can obtain free of charge from SCHUFA. People can then send the main results -- not the full record, and with identifiers removed -- to OpenSCHUFA. The project will use the data to try to understand what real-life variables produce good and bad credit scores when fed into the SCHUFA system. Ultimately, the hope is that it will be possible to model, perhaps even reverse-engineer, the underlying algorithm.
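
To give a flavor of what "modeling the algorithm" might look like, here's a minimal sketch of the idea: fit an interpretable model to donated (attributes, score) pairs and inspect which inputs move the score. The field names and numbers below are invented for illustration; the project's real submission format and feature set are unknown to us.

    import numpy as np

    # Each row: [age, years_at_address, number_of_accounts, past_defaults]
    X = np.array([
        [34.0,  6.0, 3.0, 0.0],
        [22.0,  1.0, 1.0, 0.0],
        [51.0, 12.0, 5.0, 1.0],
        [45.0,  3.0, 7.0, 2.0],
        [29.0,  2.0, 2.0, 0.0],
        [60.0, 20.0, 4.0, 0.0],
    ])
    scores = np.array([97.2, 92.5, 88.1, 71.4, 93.8, 98.6])  # donated scores

    # Ordinary least squares: score ~= X @ w + b
    A = np.hstack([X, np.ones((len(X), 1))])
    coefs, *_ = np.linalg.lstsq(A, scores, rcond=None)
    names = ["age", "years_at_address", "accounts", "defaults", "intercept"]
    for name, coef in zip(names, coefs):
        print(f"{name}: {coef:+.2f}")

With enough donated records, the sign and size of such coefficients would hint at what the black box rewards and punishes, even if the true algorithm is nothing like a linear model.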

This is an important attempt to pry open one of the major black boxes that are starting to rule our lives. Whether or not it manages to understand the SCHUFA algorithm, the exercise will provide useful experience for other projects to build on in the future. And if you are wondering whether it's worth expending all this money and effort, look no further than SCHUFA's response to the initiative, reported here (original in German):

SCHUFA considers the project as clearly directed against the overarching interests of the economy, society and the world of business in Germany.

The fact that SCHUFA apparently doesn't want people to know how its algorithm works is a pretty good reason for trying to find out.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

As Video Games Are In Presidential Crosshairs, New Study Again Shows They Don't Affect Behavior

Mon, 03/19/2018 - 23:30

Violent video games have once again found themselves in the role of scapegoat after a recent spate of gun violence in America. After the Florida school shooting, and in the extended wake of the massacre in Las Vegas, several government representatives at various levels have leveled their ire at violent games, including Trump, who convened an insane sit-down, casting himself as moderator between game company executives and those who blame them for all the world's ills. Amid this deluge of distraction, it would be easy to forget that study after study after study has detailed how bunk the notion is that you can tie real-world violence to violent games. Not to mention, of course, that more people are playing more violent video games than at any moment in the history of the world, and at the same time research shows a declining trend for deviant behavior in teens rather than any sort of upswing.

But a recent study conducted by the Max Planck Institute and published in Molecular Psychiatry further demonstrates the point that violence and games are not connected, with a specific methodology that carries a great deal of weight. The purpose of the study was to move beyond measuring behavior effects immediately after short, unsustained bursts of game-playing and into the realm of the effects on sustained, regular consumption of violent video games.

To correct for the "priming" effects inherent in these other studies, researchers had 90 adult participants play either Grand Theft Auto V or The Sims 3 for at least 30 minutes every day over eight weeks (a control group played no games during the testing period). The adults chosen, who ranged from 18 to 45 years old, reported little to no video game play in the previous six months and were screened for pre-existing psychological problems before the tests.

The participants were subjected to a wide battery of 52 established questionnaires intended to measure "aggression, sexist attitudes, empathy, and interpersonal competencies, impulsivity-related constructs (such as sensation seeking, boredom proneness, risk-taking, delay discounting), mental health (depressivity, anxiety) as well as executive control functions." The tests were administered immediately before and immediately after the two-month gameplay period and also two months afterward, in order to measure potential continuing effects.

Participants in the experimental groups played GTA or The Sims, with the control group playing no games at all, and the before-and-after tests demonstrated only three significant behavior changes among all participants. That equates to less than 10% of the survey results indicating any significant change. As the Ars post points out, you would expect at least 10% to show significant change just by random chance. Going through the data and the near-complete dearth of any significant behavior changes, the study fairly boldly concludes that there were "no detrimental effects of violent video game play" among the participants.
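
To make the multiple-comparisons logic concrete, here's a toy simulation -- the test count is taken from the study, but the 0.05 threshold and the independence assumption are simplifications of ours: run a battery of significance tests on pure noise and count how many come up "significant."

    import random

    random.seed(1)
    N_TESTS = 52      # one test per questionnaire, as a simplification
    ALPHA = 0.05      # conventional significance threshold (our assumption)
    TRIALS = 10_000   # simulated replications of the whole battery

    # On pure noise each test "succeeds" with probability ALPHA, so a
    # battery of N_TESTS tests yields about ALPHA * N_TESTS spurious hits.
    avg_false_hits = sum(
        sum(random.random() < ALPHA for _ in range(N_TESTS))
        for _ in range(TRIALS)
    ) / TRIALS
    print(f"Spurious 'significant' results per battery: {avg_false_hits:.1f} of {N_TESTS}")

And the study ran its battery at several timepoints across several groups, which multiplies the number of comparisons and pushes the expected count of chance "hits" even higher -- exactly why a handful of nominally significant changes is what chance alone predicts.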

Were this a fair and just world, this study would be seen as merely confirming what our common sense observations tell us: playing violent games doesn't make someone violent in real life. After all, were that not true, we would see violence rising commensurate with the availability of violent games across a collection of global societies. That simply isn't happening.

So, as America tries to work out its mass-shooting problem, one thing should be clear: whatever list you have in your head about what to blame for the violence, we should be taking video games off of that list.
