George Zimmerman Retweets Image of Trayvon Martin's Body & Makes Racist Tweets



Recommended Posts

thomastmc

[Image: screenshot of the retweeted post]

George Zimmerman, who was acquitted in the 2012 shooting death of Florida teen Trayvon Martin, has again sparked outrage after he retweeted a photo of Martin’s body.

The tweet, which can be seen above, has since been deleted by Twitter. It was from a Zimmerman fan, who said, “Z-man is a one man army.” The anger directed toward Zimmerman hasn’t seemed to bother him.

Zimmerman, 31, has not shied away from the spotlight since he was found not guilty of second-degree murder in the Martin case. Last month he teamed up with an anti-Muslim Florida gun shop owner to raise money by painting a Confederate flag.

Zimmerman has also been arrested on domestic violence charges, and was shot during a road rage incident earlier this year.

His tweets from the account @TherealGeorgeZ have caused angry reactions in the past.

http://heavy.com/news/2015/09/george-zimmerman-retweets-tweets-trayvon-martin-body-picture-photo-twitter-one-man-army-serious-slav-deleted/

---------------------------------------------------------------------------------------

George Zimmerman uses a Confederate flag as his user image (obviously because of his pride in Southern American history), captioned "The 2nd protects the 1st". He uses the account to post images reflecting his opinions on Obama's supposed race baiting (ironic), to call people he evidently believes to be black "apes", to post racist images about his killing of Martin bearing the slogan "White Lives Matter", and to proclaim that it took a "WHITE" man with balls to try to assassinate him.

He is obviously unashamed of, and in fact proud of, having chased a 17-year-old young man down through his own neighborhood in the rain and killed him.

Zimmerman tweets today:

[Image: screenshot of Zimmerman's tweet]

"As much as I love owning all you trolls I have to work... On my tan! Tell "Karma" she's worthless, God protects me." https://twitter.com/TherealGeorgeZ/status/648542454413705216

 


  • 3 weeks later...
The Rev

It's like OJ Simpson all over again....  I bet the jurors are totally regretting their votes now.


FloatingFatMan

Sooner or later, someone is going to shoot this jerk and I doubt there'll be too many mourners.

 


mudslag

Sooner or later, someone is going to shoot this jerk and I doubt there'll be too many mourners.

 

Karma won't be worthless then.


+Zagadka

I don't think that the proper term for such excessive douchery exists in the English language.

I can comprehend why people defended him initially, but he seems to be going out of his way to besmirch anything positive anyone said about him. It is almost like meta-trolling from the ACLU.


This topic is now closed to further replies.


  • Similar Content

    • By zikalify
      Ofcom: A third of users found hate speech on video sites in the last 90 days
      by Paul Hill



      The UK’s digital regulator, Ofcom, has published new findings suggesting that a third of people who accessed video-sharing websites such as YouTube came across hateful content in the last three months, an indication that content policing may not be working. The findings were released to coincide with new rules, published by Ofcom, with which video-sharing platforms (VSPs) must comply.

      Ofcom’s study found that a third of users had encountered hateful content online; the regulator said the content was typically directed at people on the basis of race, religion, transgender identity, or sexual orientation.

      Beyond that content, a quarter of those asked said they had been exposed to bullying, abusive behaviour and other threats. A fifth of respondents said that they had witnessed or experienced racist content online and those from a minority ethnic background were more likely to have encountered this content.

      As younger people tend to be more adept with technology, it's unsurprising to hear from Ofcom that 13- to 17-year-olds were more likely to have been exposed to harmful content online in the last three months. Seven in ten VSP users who responded said they had come across harmful content, a figure that rose to eight in ten among 13- to 17-year-olds.

      The regulator also found that 60% of VSP users who responded were unaware of the safety and protection features on the websites they use, and only 25% had ever flagged or reported content they thought was harmful. To help raise awareness, Ofcom has told VSPs that they need to introduce clear upload rules and make it easy to flag or report content, and it said that adult sites should introduce age-verification systems.

      If sites fail to comply with Ofcom’s decisions, it will investigate and take action. Some of the measures it could enforce include fines, requiring a provider to take specific actions, and in serious cases, it could restrict access to the service.

    • By Jay Bonggolto
      Instagram will now block accounts that send abusive Direct Messages
      by Jay Bonggolto



      Instagram's efforts to combat cyberbullying took a more advanced approach in 2019 when it introduced an artificial intelligence-powered tool designed to detect offensive comments and persuade people to make their statements less hurtful. Last year, the service made it possible for users to bulk delete comments they found abusive.

      However, Instagram doesn't currently use technology to monitor and prevent hate speech and bullying in Direct Messages (DMs), noting that these conversations are private. Today, the Facebook-owned service announced new changes to how it addresses abusive messages and penalizes people who send them. These measures are introduced in the wake of racist attacks on footballers in the UK, including Marcus Rashford, Anthony Martial, Axel Tuanzebe, and Lauren James from Manchester United.

      Before, users who sent DMs that violated Instagram's rules would only be prevented from using that feature for a set period. Moving forward, the same violation will result in account removal. And if someone tries to set up a new account in order to circumvent Instagram's rules, the same penalty applies.

      Instagram also vows to work with UK law enforcement when it receives legal requests from authorities for information related to hate speech cases, though it will not comply with requests that are not legally valid.

      The service is also seeking input from its community as part of an effort to develop a new capability that will address abusive messages users see in their DMs. It's not entirely clear how this upcoming feature will work, but Instagram is planning to release it in the next few months.

      In addition, Instagram will soon allow everyone to disable DMs from people they don’t follow, a capability that's already available to business and creator accounts as well as personal accounts in a few territories. Of course, users can already switch off tags or mentions and block users who send them unwanted DMs. That said, the latest capabilities expand how people control their experience on the platform.

    • By Usama Jawad96
      Twitter updates its policies to prohibit racism
      by Usama Jawad

      Back in July 2019, Twitter updated its platform guidelines to prohibit hate speech against religious groups. Then in March 2020, it once again expanded them to include hateful conduct that dehumanizes people based on age, disability, or disease. Today, the company is banning hate speech that dehumanizes based on race, ethnicity, and national origin.



      In a blog post, Twitter has indicated that hate speech based on race, ethnicity, and national origin will be promptly removed from the platform as soon as it is reported. The company will also be using its automated processes to detect and remove content it deems hateful. Repeat offenders who violate this guideline may get their accounts temporarily suspended as well. The firm has posted the following examples of Tweets that it characterizes as hateful conduct based on the expanded policies:

      "All [national origin] are cockroaches who live off of welfare benefits and need to be taken away."
      "People who are [race] are leeches and only good for one thing."
      "[Ethnicity] are mail-order bride scum."
      "There are too many [national origin/race/ethnicity] maggots in our country and they need to leave."

      Twitter is also working with third parties to better understand how it can combat this problem. It says that dehumanizing behavior can lead to real-world violence as well, and as such, the ultimate goal is to keep people safe from hateful behavior globally, both online and offline.

    • By Jay Bonggolto
      Facebook will examine potential racial bias in its algorithms
      by Jay Bonggolto

      Facebook recently confirmed its plan to perform an internal audit on the way it manages hate speech on the platform in light of boycotts by major advertisers over the issue. Walmart, Verizon, Ford, and Microsoft are some of the big ad spenders on the social network to have joined the "Stop Hate for Profit" campaign calling on advertisers to withdraw their spending on the platform until Facebook addresses hate speech and racial issues.

      Now, the social media giant is forming new teams to study whether its algorithms contain bias toward Black, Hispanic, and other minority groups in the U.S., The Wall Street Journal reports, citing sources with knowledge of the matter. The internal teams will examine both the main Facebook platform and Instagram, comparing the effects of these algorithms on the service's minority and white users. This was confirmed by an Instagram representative.

      Facebook is also conducting discussions with third-party experts and civil rights groups on how to examine race in a reliable and consistent manner as part of this new effort. Regarding the new plan, Instagram's Stephanie Otway said, "It’s early; we plan to share more details on this work in the coming months".

      The team being created for Instagram is called the “equity and inclusion team”, although it doesn't have a leader for now. Meanwhile, Facebook's dedicated team is called the "Inclusivity Product Team", tasked with making consultations with a council of Black users and racial issue experts. The goal is to work with other product teams and develop new features for the platform designed to support minority users.

      Source: The Wall Street Journal

    • By Ather Fawaz
      Cisco increasing its contributions to half a billion dollars to combat racism and COVID-19
      by Ather Fawaz

      Cisco, one of the biggest manufacturers of networking equipment, announced that it will be putting in more money to mitigate the effects of the economic downturn caused by the COVID-19 pandemic and to tackle systemic racism. The funds will be added to Cisco's existing contribution of $275 million toward helping homeless people in Silicon Valley and fighting the pandemic. This will increase the total investment to half a billion dollars and include the initiative to curb racial inequality.

      With regard to the initiative, Cisco's Chief Executive Officer, Chuck Robbins, commented that the current efforts of corporate social responsibility programs worldwide fall short and have led to centuries of inequality and injustice.

      The California giant's announcement comes at a time when nationwide protests have sprung up after George Floyd, an African-American man, was killed in police custody. Other prominent tech firms have also chimed in and taken part in reducing social injustice and racism. In the past week, Microsoft, IBM, and Amazon have all taken measures to curb the use of facial recognition systems that could potentially demonstrate racial bias.

      Source: Bloomberg