
Germany Restricts Facebook’s Data Gathering

By Patrick   2019 Feb 7, 8:20am   566 views   9 comments


https://www.nytimes.com/2019/02/07/technology/germany-facebook-data.html

The agency said Facebook had exploited its dominant position in Germany by presenting people with an all-or-nothing choice: submit to unlimited data collection by the company or simply not use the service. The practice, the agency said, had enabled Facebook to collect data about its users’ activities on millions of other websites, helping the social network become a worldwide powerhouse of personalized advertising.

The competition regulator ruled that Facebook would now have to obtain users’ permission before merging data from other sites. The company is also prohibited from combining information from users’ Facebook accounts with data from their accounts on Facebook-owned services like Instagram and WhatsApp without the users’ permission.

“In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts,” Andreas Mundt, president of the competition authority, the Federal Cartel Office, said in a statement on Thursday.

“The combination of data sources,” the cartel authority said, “substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power.”
1   kt1652   2019 Feb 7, 9:12am

FB is an addiction for dumb people who need validation and approval.
Smart people think independently and don't need a lot of friends who aren't.
Zuckerberg was brilliant for realizing there are more dumb people than smart ones.

I'm like bitch, who is your mans?, aye
Can't keep my dick in my pants, aye
My bitch don't love me no more, aye
She kick me out I'm like vro, aye
That bitch don't wanna be friends, aye
I gave her dick, she got mad, aye
She put her tongue on my dick, aye
Look at my wrist, about 10, aye
Just got a pound of that boof, aye
Brought that shit straight to the booth, aye
Tommy my Hilfiger boots, aye
She said want fuck bitch, I do, aye
You put a gun on my mans, aye
I put a hole in your parents, aye
I just got lean on my ksubis, aye
I got an uzi no uzi, aye
Fuck on…
2   HEYYOU   2019 Feb 7, 10:30am

I love Zuckerberg! He's an Unregulated Free Market job-creating entrepreneur.
Why are the poor, with less than Zuckerberg's wealth, so jealous?
What excuse can people come up with for their failures?
In America Zuck is a capitalist winner! Take the fools' data & get rich!
3   zzyzzx   2019 Feb 7, 10:45am

#fuckfacebook
4   anonymous   2019 Feb 13, 12:18am

Most Americans don’t realize what companies can predict from their data

Sixty-seven percent of smartphone users rely on Google Maps to help them get to where they are going quickly and efficiently.

A major feature of Google Maps is its ability to predict how long different navigation routes will take. That’s possible because the mobile phone of each person using Google Maps sends data about its location and speed back to Google’s servers, where it is analyzed to generate new data about traffic conditions.
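
As a rough illustration of how pooled phone reports become a traffic estimate, here is a minimal sketch, not Google's actual pipeline: it simply averages the speeds that phones report on each road segment. The segment IDs and the (segment, speed) tuple format are invented for the example.

```python
from collections import defaultdict

def estimate_segment_speeds(pings):
    """Average the reported speeds for each road segment.

    `pings` is a list of (segment_id, speed_kmh) tuples, one per
    anonymized phone report. Returns {segment_id: mean speed}.
    """
    totals = defaultdict(lambda: [0.0, 0])   # segment -> [speed sum, count]
    for segment_id, speed in pings:
        totals[segment_id][0] += speed
        totals[segment_id][1] += 1
    return {seg: total / n for seg, (total, n) in totals.items()}

# Three phones crawling along segment "A1", one moving freely on "B2":
pings = [("A1", 30), ("A1", 25), ("A1", 35), ("B2", 80)]
print(estimate_segment_speeds(pings))  # {'A1': 30.0, 'B2': 80.0}
```

A real system would first map-match raw GPS fixes onto road segments and weight reports by recency, but the core aggregation is this simple.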

Information like this is useful for navigation. But the exact same data that is used to predict traffic patterns can also be used to predict other kinds of information – information people might not be comfortable with revealing.

For example, data about a mobile phone’s past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

These predictions label who you are as a person and guess what you’re likely to do in the future. Research shows that people are largely unaware that these predictions are possible, and, if they do become aware of it, don’t like it. In my view, as someone who studies how predictive algorithms affect people’s privacy, that is a major problem for digital privacy in the U.S.

How is this all possible?

Every device that you use, every company you do business with, every online account you create or loyalty program you join, and even the government itself collects data about you.

The kinds of data they collect include things like your name, address, age, Social Security or driver’s license number, purchase transaction history, web browsing activity, voter registration information, whether you have children living with you or speak a foreign language, the photos you have posted to social media, the listing price of your home, whether you’ve recently had a life event like getting married, your credit score, what kind of car you drive, how much you spend on groceries, how much credit card debt you have and the location history from your mobile phone.

https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

It doesn’t matter if these datasets were collected separately by different sources and don’t contain your name. It’s still easy to match them up according to other information about you that they contain.

For example, there are identifiers in public records databases, like your name and home address, that can be matched up with GPS location data from an app on your mobile phone. This allows a third party to link your home address with the location where you spend most of your evening and nighttime hours – presumably where you live. This means the app developer and its partners have access to your name, even if you didn’t directly give it to them.
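
A minimal sketch of how such a linkage could work: take the most frequent nighttime coordinate as the likely home, then join it against a records table keyed by coordinates. Everything here, the (hour, lat, lon) ping format, the rounding precision, and the toy records table, is invented to illustrate the general technique, not any particular broker's system.

```python
from collections import Counter

def infer_home(location_pings):
    """Guess a home location as the most frequent nighttime position.

    `location_pings` is a list of (hour_of_day, lat, lon) tuples.
    Coordinates are rounded to 3 decimal places (~100 m) so nearby
    pings cluster together. Returns the modal (lat, lon) seen between
    10 p.m. and 6 a.m., or None if there are no nighttime pings.
    """
    night = [(round(lat, 3), round(lon, 3))
             for hour, lat, lon in location_pings
             if hour >= 22 or hour < 6]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

# Once a likely home coordinate is known, it can be joined against a
# public-records table keyed by address coordinates:
records = {(52.520, 13.405): "A. Example, 123 Main St"}
pings = [(23, 52.5201, 13.4049), (2, 52.5199, 13.4051), (14, 48.1, 11.6)]
home = infer_home(pings)
print(records.get(home))  # matches when the rounded coordinates line up
```

The point is that neither dataset needs to contain a name and a location together; the join key is the coordinate itself.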

In the U.S., the companies and platforms you interact with own the data they collect about you. This means they can legally sell this information to data brokers.

Data brokers are companies that are in the business of buying and selling datasets from a wide range of sources, including location data from many mobile phone carriers. Data brokers combine data to create detailed profiles of individual people, which they sell to other companies.

Combined datasets like this can be used to predict what you’ll want to buy in order to target ads. For example, a company that has purchased data about you can do things like connect your social media accounts and web browsing history with the route you take when you’re running errands and your purchase history at your local grocery store.

Employers use large datasets and predictive algorithms to make decisions about who to interview for jobs and predict who might quit. Police departments make lists of people who may be more likely to commit violent crimes. FICO, the same company that calculates credit scores, also calculates a “medication adherence score” that predicts who will stop taking their prescription medications.

How aware are people about this?

Even though people may be aware that their mobile phones have GPS and that their name and address are in a public records database somewhere, it’s far less likely that they realize how their data can be combined to make new predictions. That’s because privacy policies typically only include vague language about how data that’s collected will be used.

In a January survey, the Pew Internet and American Life project asked adult Facebook users in the U.S. about the predictions that Facebook makes about their personal traits, based on data collected by the platform and its partners. For example, Facebook assigns a “multicultural affinity” category to some users, guessing how similar they are to people from different race or ethnic backgrounds. This information is used to target ads.

The survey found that 74 percent of people did not know about these predictions. About half said they are not comfortable with Facebook predicting information like this.

In my research, I’ve found that people are only aware of predictions that are shown to them in an app’s user interface, and that makes sense given the reason they decided to use the app. For example, a 2017 study of fitness tracker users showed that people are aware that their tracker device collects their GPS location when they are exercising. But this doesn’t translate into awareness that the activity tracker company can predict where they live.

In another study, I found that Google Search users know that Google collects data about their search history, and Facebook users are aware that Facebook knows who their friends are. But people don’t know that their Facebook “likes” can be used to accurately predict their political party affiliation or sexual orientation.
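
The published studies use statistical models trained on millions of users; as a toy stand-in for the general idea, the sketch below counts how often each liked page co-occurs with a known trait, then scores a new user's likes against those counts. All page names and labels are invented.

```python
from collections import defaultdict

def train_like_model(users):
    """Count how often each liked page co-occurs with each label.

    `users` is a list of (set_of_liked_pages, label) pairs, with labels
    such as "party_a" / "party_b". Returns per-page label counts.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for likes, label in users:
        for page in likes:
            counts[page][label] += 1
    return counts

def predict(counts, likes):
    """Predict the label most associated with the pages a user likes."""
    scores = defaultdict(int)
    for page in likes:
        for label, n in counts[page].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else None

train = [({"PageX", "PageY"}, "party_a"),
         ({"PageX"}, "party_a"),
         ({"PageZ"}, "party_b")]
model = train_like_model(train)
print(predict(model, {"PageX"}))  # 'party_a'
```

Real models are far more sophisticated, but the underlying inference is the same: likes are features, and traits the user never disclosed are the predicted labels.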

http://www.pewinternet.org/2019/01/16/facebook-algorithms-and-personal-data/

What can be done about this?

Today’s internet largely relies on people managing their own digital privacy.

Companies ask people up front to consent to systems that collect data and make predictions about them. This approach would work well for managing privacy, if people refused to use services that have privacy policies they don’t like, and if companies wouldn’t violate their own privacy policies.

But research shows that nobody reads or understands those privacy policies. And, even when companies face consequences for breaking their privacy promises, it doesn’t stop them from doing it again.

Requiring users to consent without understanding how their data will be used also allows companies to shift the blame onto the user. If a user starts to feel like their data is being used in a way that they’re not actually comfortable with, they don’t have room to complain, because they consented, right?

In my view, there is no realistic way for users to be aware of the kinds of predictions that are possible. People naturally expect companies to use their data only in ways that are related to the reasons they had for interacting with the company or app in the first place. But companies usually aren’t legally required to restrict the ways they use people’s data to only things that users would expect.

One exception is Germany, where the Federal Cartel Office ruled on Feb. 7 that Facebook must specifically ask its users for permission to combine data collected about them on Facebook with data collected from third parties. The ruling also states that if people do not give their permission for this, they should still be able to use Facebook.

I believe that the U.S. needs stronger privacy-related regulation, so that companies will be more transparent and accountable to users about not just the data they collect, but also the kinds of predictions they’re generating by combining data from multiple sources.

http://theconversation.com/most-americans-dont-realize-what-companies-can-predict-from-their-data-110760
5   anonymous   2019 Feb 17, 9:35am

Two words in Facebook's latest regulatory filing show how worried the company is about what it's doing to people.

Facebook CEO Mark Zuckerberg has talked a lot about making sure that users' time on his social network is "time well spent."

After a year of headlines blasting Facebook for negatively affecting everything from mental health to memories, Zuckerberg was responding (and, he said, showing responsibility) to growing public concerns about the age of social media.

But as Facebook revealed in a tiny but telling change to its latest quarterly report, the company also appreciates the very real threat these concerns pose to its business.

"Any number of factors could potentially negatively affect user retention, growth, and engagement," Facebook explains in the section of its 10K report devoted to risks related to its business. If, for example:

— "there are decreases in user sentiment due to questions about the quality or usefulness of our products or our user data practices, or concerns related to privacy and sharing, safety, security, well-being, or other factors;"

We bolded "well-being" to highlight the two words because they were not included in the same boilerplate sentence in the report released three months earlier. Go ahead, check for yourself.

Sure, regulatory filings to the SEC are kitchen-sink exercises, with every potential risk a corporate attorney can dream up explicitly spelled out. The company isn't saying it expects any of these risks to actually occur in the near future; it just wants to be able to say it warned you they might occur in case you ever decided it might be a good idea to sue the company.

That said, Facebook never thought its impact on people's well-being was a notable risk before. To the contrary, the company couldn't stop bragging about its altruistic "social mission."

Remember Zuckerberg's letter to shareholders in Facebook's IPO prospectus? Here's an excerpt, with emphasis his:

"We hope to strengthen how people relate to each other.

Even if our mission sounds big, it starts small — with the relationship between two people.

Personal relationships are the fundamental unit of our society. Relationships are how we discover new ideas, understand our world and ultimately derive long-term happiness."

It's been seven years since Zuckerberg wrote those words, and 15 years since the social network was created. A lot has changed in that time. But sometimes two small words buried in a dense regulatory filing say how much has changed better than anything.

https://www.businessinsider.com/facebook-change-sec-filing-well-being-users-2019-2
6   Ceffer   2019 Feb 17, 11:20am

Facebook will no longer read and translate the tattoos on your genital selfies.
7   anonymous   2019 Feb 18, 1:10am

Behold, the Facebook phishing scam that could dupe even vigilant users - HTML block almost perfectly reproduces the Facebook single sign-on window.

Phishers are deploying what appears to be a clever new trick to snag people’s Facebook passwords by presenting convincing replicas of single sign-on login windows on malicious sites, researchers said this week.

Single sign-on, or SSO, is a feature that allows people to use their accounts on other sites—typically Facebook, Google, LinkedIn, or Twitter—to log in to third-party websites. SSO is designed to make things easier for both end users and websites. Rather than having to create and remember a password for hundreds or even thousands of third-party sites, people can log in using the credentials for a single site. Websites that don’t want to bother creating and securing password-based authentication systems need only access an easy-to-use programming interface. Security and cryptographic mechanisms under the hood allow the login to happen without the third-party site ever seeing the username or password.

Researchers with password manager service Myki recently found a site that purported to offer SSO from Facebook. As the video below shows, the login window looked almost identical to the real Facebook SSO. This one, however, didn’t run on the Facebook API and didn’t interface with the social network in any way. Instead, it phished the username and password.

Just add HTML

One of the ingredients that made the login window look so real is that it almost perfectly reproduced what users would see if they were encountering a genuine Facebook SSO. The status bar, navigation bar, shadows, and HTTPS-based Facebook address all appear almost exactly the same. The window presented on the phishing page, however, was rendered using a block of HTML, rather than by calling an API that opens a real Facebook window. As a result, anything typed into the fake SSO page was funneled directly to the phishers.

While the replica is convincing, there was one easy way any user could immediately tell it was a fake. Genuine SSOs from Facebook and Google can be dragged outside of the window of the third-party site without any part of the login prompt disappearing. Portions of the fake SSO, by contrast, disappeared when doing this. Another tell-tale sign for Myki users, and likely users of other password managers, was that the autofill feature of the password manager didn’t work, since contrary to the address showing in the HTML block, the actual URL the users were visiting wasn’t from Facebook. More advanced users almost certainly could have spotted the forgery by viewing the source code of the site they were visiting, too.
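
The password-manager behavior described above boils down to an origin check: fill credentials only when the page's real URL, not the address painted inside the HTML mock-up, belongs to the expected host over HTTPS. This is an illustrative simplification of what managers like Myki do, with an invented function name:

```python
from urllib.parse import urlparse

def autofill_allowed(actual_url, expected_host="www.facebook.com"):
    """Mimic a password manager's origin check before autofilling.

    `actual_url` is the URL the browser really navigated to, which a
    phishing page cannot spoof, unlike the address bar it draws in HTML.
    """
    parts = urlparse(actual_url)
    return parts.scheme == "https" and parts.hostname == expected_host

# Real SSO popup: the browser actually navigated to facebook.com
print(autofill_allowed("https://www.facebook.com/login.php"))  # True
# Fake SSO: the "window" is HTML drawn by the phishing page itself
print(autofill_allowed("https://evil.example/fake-login"))     # False
```

Because the check runs against the browser's true location rather than anything rendered on screen, the convincing pixels of the fake window never enter into it.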

The convincing forgery is yet another reminder that attacks only get better. It also reaffirms the value of using multi-factor authentication on any site that offers it. A password phished from a Facebook account that used MFA protection would have been of little use to attackers since they wouldn’t have had the physical key or smartphone that’s required when logging in from a computer that has never accessed the account before. Facebook also publishes tips for dealing with phishing.

https://arstechnica.com/information-technology/2019/02/behold-the-facebook-phishing-scam-that-could-dupe-even-vigilant-users/
8   anonymous   2019 Feb 21, 3:35pm

Facebook has a terrorism problem in the Philippines - Asia Foundation-Rappler study shows how Islamic State-aligned groups use Facebook to recruit new members to their extremist cause

Fulan was approached over his social media posts indicating his devout Islamic beliefs. Aboud was targeted because of his online presence as a Muslim student leader in his local community.

Both young Filipinos, resident in the Philippines’ restive southern Mindanao region, were found and contacted over Facebook Messenger by anonymous Islamic State (ISIS) recruiters. While neither ultimately joined ISIS’ extremist cause, it’s unclear how many Filipinos have been recruited by the terror group’s tech-savvy efforts to connect with a new generation of potential jihadists.

Both Fulan and Aboud featured in a new study released by the Asia Foundation, a US-based think tank, and Rappler, a local online media outlet, that shows how ISIS is using Facebook to spread propaganda and bolster its militant ranks in Mindanao.

The study, entitled “Understanding Violent Extremism: Messaging and Recruitment Strategies on Social Media in the Philippines,” says the vast majority of extremist online activities are “opportunistic and unsophisticated” and that “the scope for online radicalization and recruitment follows pathways already identified as being influential in the Philippines.”

That entails highly localized messaging that touches on local grievances, often in dialects that allow for the publication and dissemination of extremist content that is not readily understood by wider audiences, including by law enforcement agencies.

While extremist posts in English and Tagalog are easier for authorities to track and delete, messages in local Moro dialects such as Maranao, Maguindanaoan and Tausug often slip through filters and other detection mechanisms that Facebook uses to screen objectionable content.

“Facebook is almost the exclusive theater in the Philippines through which extremist actors are able to grab the attention of local audiences and engage in dialogue with persons they’re seeking to influence,” the research says.

With as many as 60 million monthly Facebook users, the Philippines is frequently cited as the “social media capital of the world.” Facebook’s popularity in the Philippines, witnessed in over one billion total visits per month in late 2017, can be attributed in part to the fact that mobile phone users can access the platform even without paying for mobile data.

Facebook’s Audience Insights dashboard estimates as many as 10 million users in Mindanao. That penetration contributed to the fast and wide spread of highly viral extremist propaganda during the five-month siege of Mindanao’s Marawi City by ISIS-aligned Filipino militant groups in 2017.

Pro-ISIS groups used open social media not only to disseminate propaganda but also to contact ISIS in Syria and Iraq, the Asia Foundation-Rappler research shows.

Once connected, the groups often shifted to encrypted private conversations on secure messaging services such as Facebook’s WhatsApp. They later used more secure messaging services such as Telegram, according to the research.

When the Marawi siege started in May 2017, Telegram provided enough security and features to allow violent extremist groups one-way broadcasts that reached up to 10,000 viewers, the research found.

The Marawi siege also brought together a mix of computer-savvy college recruits from university campuses in Mindanao, including through Muslim student organizations and their alumni at Catholic institutions as well as at state universities and polytechnic institutes.

The siege uprooted over 350,000 civilians and left the core of the country’s only Islamic city in shambles. At least 1,100 people, mostly Islamic militants, were killed in the urban warfare operation that took a page from ISIS’ conflicts in the Middle East.

When the siege started, Facebook was flooded with blurry images and videos of cloaked men carrying ISIS’ black flag well before mainstream media networks reported that state security forces and Islamist gunmen had clashed in Marawi’s main business district.

The ISIS-linked Maute Group also used Facebook to post a video of a Catholic priest, Teresito Soganub, whom it had taken hostage and who, after the first clashes erupted, called on Duterte from captivity to stop the military offensives and pull Filipino troops from the city.

“Do not use violence, because your enemies, they are ready to die for their religion. They are ready to die that their laws will be followed,” the priest said in a video addressed to President Rodrigo Duterte that Facebook eventually took down.

The research found that the spread of those viral materials has “diminished” since the siege ended, perhaps due to the killing or capture of those who disseminated the content, but that the existence of private networks means official efforts to eradicate violent extremism online will have only a “limited effect.”

That’s in part because online recruitment tactics have until now been poorly understood.

The Asia Foundation-Rappler study maps how ten ISIS-inspired Filipino militant groups recruit new adherents, starting with online offers of Arabic language lessons, then religious education, indoctrination in violence, financial inducements, guerrilla training, and the swearing of allegiance.

The study emphasizes how social media “fuels an environment where offline worlds get reinforced online”, including through “closed special-interest groups to spam group members, drawing on limited shared connections between the recruiter and their target.”

So far, the government has reached for crude levers to repress ISIS’ recruitment. Duterte imposed martial law hours after the ISIS-aligned Abu Sayyaf and Maute groups, aided by foreign jihadists mostly from neighboring Malaysia and Indonesia, attacked Marawi in a bid to establish a wilayat, or Islamic State province.

The Abu Sayyaf, Maute Group, Bangsamoro Islamic Freedom Fighters and Ansar Al-Khilafa Philippines – all of which have pledged allegiance to Islamic State – continue to pose security threats in Mindanao and nationwide, according to the Philippine military.

These militant groups are now all bidding to rebuild their forces after sustaining heavy losses in the battle for Marawi and subsequent military operations on their bases, including in Maguindanao, Lanao de Sur, Basilan and Sulu provinces.

While less coherent than ISIS propaganda in the Middle East, catch phrases used in the Philippines to attract recruits include “widespread vulnerability, economic desperation, ineffective governance and ethnic marginalization” of the country’s minority Muslim population in the south, the research shows.

Nathan Shea, a senior program officer at the Asia Foundation’s Conflict and Fragility Program, said that simply removing offensive or extremist Facebook content will not be enough to stop the messaging and recruitment.

“Even when the original post is deleted, extremist messages and content can continue to be shared,” Shea wrote in the Asia Foundation’s weekly InAsia blog. “Meanwhile, those whose posts are censored or deleted may become isolated from more positive communities and begin to conduct their online activities in a secretive manner.”

https://www.asiatimes.com/2019/02/article/facebook-has-a-terrorism-problem-in-the-philippines/
9   OccasionalCortex   2019 Feb 21, 4:35pm

Ceffer says
Facebook will no longer read and translate the tattoos on your genital selfies.


Damn that was just too good!
