
Tech News - Security

470 Articles

Sugar operating system: A new OS to enhance GPU acceleration security in web apps

Savia Lobo
23 Aug 2018
3 min read
Researchers at the University of California, Irvine presented Sugar (Secure GPU Acceleration), a new OS solution to enhance the security of GPU acceleration for web apps. Their research paper, Sugar: Secure GPU Acceleration in Web Browsers, is a collective effort led by Zhihao Yao et al.

Recently, GPU-based graphics acceleration in web apps has become increasingly popular. WebGL is the key component, providing OpenGL-like graphics for web apps, and it is currently used in 53% of the top-100 websites. However, several attack vectors have been demonstrated through WebGL, making it vulnerable to security attacks. One such example is the Rowhammer attack which took place in May this year. Although web browsers have patched the vulnerabilities and added new runtime security checks, the systems are still vulnerable to zero-day exploits, especially given the large size of the Trusted Computing Base (TCB) of the graphics plane.

Sugar uses a dedicated virtual graphics plane for each web app by leveraging modern GPU virtualization solutions. This enhances system security, since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, Sugar achieves high performance. Unlike current systems, Sugar uses two underlying physical GPUs, when available, to co-render the user interface (UI): one GPU provides virtual graphics planes for web apps, while the other provides the primary graphics plane for the rest of the system. This design not only provides strong security guarantees but also enhanced performance isolation.

The two GPU designs in Sugar for secure web apps

The researchers presented two different designs of Sugar in their paper: a single-GPU design and a dual-GPU design. In both, web apps use the virtual graphics planes created by the virtualizable GPU. The main difference between the two is the primary graphics plane.
Single-GPU design target: Single-GPU Sugar is designed for machines with a single virtualizable GPU. The main targets of this design are commodity desktops and laptops using Intel processors that incorporate a virtualizable integrated GPU (all Intel Core processors starting from the 4th generation, i.e., Haswell). The primary graphics plane in this design uses the same underlying virtualizable GPU but has exclusive access to the display connected to it.

Dual-GPU design target: Dual-GPU Sugar is designed for machines with two physical GPUs, one of which is virtualizable. The main targets for this design are high-end desktops and laptops that incorporate a second GPU in addition to the virtualizable integrated Intel GPU. The primary graphics plane here uses the other GPU, which is connected to the display. Dual-GPU Sugar provides better security than single-GPU Sugar, especially against denial-of-service attacks, and it also achieves better graphics performance isolation.

The researchers demonstrated that Sugar reduces the Trusted Computing Base (TCB) exposed to web apps and thus eliminates various vulnerabilities already reported in the WebGL framework. They also showed that Sugar's performance is high, providing user-visible performance similar to that of existing, less secure systems. Read more about Sugar in detail in its research paper.

Introducing MapD Cloud, the first Analytics Platform with GPU Acceleration on Cloud
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
5 examples of Artificial Intelligence in Web apps


Microsoft claims it halted Russian spearphishing cyberattacks

Richard Gall
22 Aug 2018
3 min read
Microsoft claims it identified and stopped a number of Russian cyberattacks just last week. In a post published on Monday (August 20), Brad Smith wrote that "Microsoft's Digital Crimes Unit (DCU) successfully executed a court order to disrupt and transfer control of six internet domains created by a group widely associated with the Russian government and known as Strontium."

The attacks are notable not only because of Strontium's links with the Russian government, but also because of the institutions these 'fake' domains were targeting. One of the domains is believed to mimic the International Republican Institute, while another is supposedly an imitation of the conservative think tank the Hudson Institute. CNN notes that "both think tanks have been critical of Russia." Smith also writes that "other domains appear to reference the U.S. Senate but are not specific to particular offices."

Spearphishing explained

The attackers are alleged to have used a technique known in cybersecurity as spearphishing. This is where an email or a website is disguised as a reliable and trustworthy source to scam users into handing over information. In this instance, the attackers could have been imitating Republican think tanks in order to get staff to hand over information.

This isn't the first spearphishing attack Microsoft claims to have intercepted. Brad Smith writes that 84 fake websites believed to be linked to Strontium have been transferred to Microsoft in the last two years. Microsoft has notified the Hudson Institute and the International Republican Institute about the attacks. "Microsoft will continue to work closely with them and other targeted organizations on countering cybersecurity threats to their systems. We've also been monitoring and addressing domain activity with Senate IT staff the past several months, following prior attacks we detected on the staffs of two current senators."
Next steps: Microsoft is expanding its Defending Democracy Program

Microsoft has also announced it will be expanding its Defending Democracy Program with a new initiative called Microsoft AccountGuard. This will "provide state-of-the-art cybersecurity protection at no extra cost to all candidates and campaign offices at the federal, state and local level, as well as think tanks and political organizations we now believe are under attack" (free if you're using Office 365).

Read next

Do you want to know what the future holds for privacy? It's got Artificial Intelligence on both sides.
A Twitter video shows how voting machines used in 18 states can be hacked in 2 mins
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project


Google’s incognito location tracking scandal could be the first real test of GDPR

Savia Lobo
21 Aug 2018
4 min read
When you ask Google to turn off location tracking, it actually keeps tracking you, in effect in incognito mode. This default behavior opens up Google to a potentially huge fine under Europe's GDPR rules.

Google is secretly tracking your moves

When users turn off location tracking, they expect Google to stop detecting where they are, but this is not the case: Google continues as a secret stalker, without the user's consent. Recently, Associated Press News reported that Google continues to collect a user's location points while users think they are safe from being tracked. According to the AP, location tracking by Google continues even if the user has disabled it, and the following are some of the resulting issues:

User settings governing location markers are in different places
Location tracking can be "paused", but not permanently disabled
Location tracking continues in Maps, Search and other Google applications regardless of the "Location History" setting
Warnings provided to both iOS and Android users are misleading

How is Google's location tracking violating the EU's new GDPR rules?

In May this year, Europe's much-anticipated new privacy law, the General Data Protection Regulation (GDPR), came into force. The law affects virtually every technology company worldwide. Under the GDPR, any company operating in the EU, or any company that serves EU citizens, must abide by its strict new privacy guidelines, meant to protect consumers from companies abusing their personal data. Any company failing to comply faces financial penalties as high as 4 percent of its annual revenue; for Google, this penalty could mean billions of dollars in fines. The GDPR's data minimisation principle states that data should be collected only for specified, explicit and legitimate purposes.
Serena Tierney, a partner at VWV law firm and a data protection and privacy specialist, said to The Register, "The legitimate purpose of the data collection must be clear. Is it only used for Google's own internal machine learning algorithms, say, or is it part of a personal profile sold to advertisers?"

"It's part of a wider public debate. Is this part of the social contract between society generally (including me) and search engines (including Google) that in return for getting free search, for example, we expect our personal data to be used for personal advertising, with no way for us to opt out?" Tierney continued.

Rafe Laguna of Open-Xchange, an open source infrastructure provider, says, "The Google location scandal could be the first real test of GDPR. The regulation states that user consent must be clear, distinguishable and written in plain language."

Google updated its location policies: "Some location data may be saved"

Right after Google faced the AP's investigation into its location tracking practices, it made some quick updates to its location history feature. According to a report from the Associated Press, in this update made on 16th August, Google acknowledges that it still tracks users via its Google Maps, weather update, and browser search services. As per Google's help page for the location history setting, "some location data may be saved as part of your activity on other services, like Search and Maps." The Location History toggle won't actually stop Google from tracking users. However, users can stop the tracking by disabling the "Web and App Activity" option (which is enabled by default). With that option disabled, Google can no longer store and track users' Maps data and browser searches for location. To know more about this evolving story in detail, visit Associated Press News' full coverage.
Microsoft Cloud Services get GDPR Enhancements
Machine learning APIs for Google Cloud Platform
Build an IoT application with Google Cloud [Tutorial]


Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker embedded in git. Using git's internal storage ensures that no extra files are added to your project, and you can push your bugs to the same git remote you are already using to collaborate with other people. The main idea behind implementing a distributed bug tracker in git was to stop relying on a web service somewhere to deal with bugs; thanks to this implementation, browsing and editing bug reports offline is no longer a pain.

While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use: it is currently a proof of concept, released just three days ago at version 0.2.0. Reddit is abuzz with views on the release, with users weighing in on both its merits and its drawbacks.

Now that you want to get your hands on git-bug, let's look at how to get started.

Installing git-bug, Linux packages needed, and CLI usage

To install git-bug, all you need to do is execute the following command:

go get github.com/MichaelMure/git-bug

If it's not done already, add the golang binary directory to your PATH:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Alternatively, you can use pre-compiled binaries by following three simple steps:

1. Head over to the release page and download the appropriate binary for your system.
2. Copy the binary anywhere in your PATH.
3. Rename the binary to git-bug (or git-bug.exe on Windows).

The only Linux package available so far is for Arch Linux (AUR).

Further, you can use the CLI to work with git-bug using the following commands:

Create a new bug (your favorite editor will open to write a title and a message): git bug new
Push your new entry to a remote: git bug push [<remote>]
Pull for updates: git bug pull [<remote>]
List existing bugs: git bug ls

Use commands like show, comment, open or close to display and modify bugs.
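The core idea git-bug builds on — that git's object database can hold arbitrary data under custom refs, without adding files to your working tree — can be sketched with plain git commands. This is an illustrative sketch only; git-bug's real storage layout is more elaborate, and the ref name below is made up for the demo:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Store a bug report as a blob in git's object store
blob=$(printf 'Title: crash on startup\n' | git hash-object -w --stdin)

# Track it under a custom ref, outside refs/heads
git update-ref refs/bugs/demo "$blob"

# The data round-trips through git, yet the working tree stays empty
git cat-file -p refs/bugs/demo
```

Because the data lives under ordinary refs, it can be pushed and pulled like any branch, which is what lets git-bug reuse your existing remote.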
For more details about each command, you can run git bug <command> --help or scan the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal
Use the git bug termui command to browse and edit bugs. A short video on the project page demonstrates how easy and interactive it is to browse and edit bugs.

#2 Launch a rich web UI
Take a look at the web UI obtained with git bug webui. This web UI is entirely packed inside the same go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API; take a look at the schema for more clarity.

The additional features that are planned include:

media embedding
import/export of GitHub issues
extendable data model to support arbitrary bug trackers
inflatable raptor

Every new release is expected to come with exciting new features, but it also comes with a few minor constraints; you can check out some of the minor inconveniences listed on the GitHub page. We can't wait for the project to be in fully working condition, but before that, if you need any additional information on how git-bug works, head over to the GitHub page.

Snapchat source code leaked and posted to GitHub
GitHub open sources its GitHub Load Balancer (GLB) Director
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?


16 year old hacked into Apple’s servers, accessed ‘extremely secure’ customer accounts for over a year undetected

Melisha Dsouza
20 Aug 2018
3 min read
The world's first trillion-dollar public company, Apple, had its servers hacked. By a Melbourne-based schoolboy aged 16. Yes, read that again. That's how safe your data is at Apple, the most privacy-conscious of the FAANG tech giants. The student, whose name cannot be publicly revealed due to his age and his reputation in the hacking community, reportedly pleaded guilty to his actions in an Australian Children's Court this week.

"Dream of working at Apple" leads teen to hack into its servers

The accused juvenile, no stranger to cybercrime, is well known in the international hacking community. His ability to develop computerized tunnels and online bypassing systems to hide his identity served him well, until a raid on his family home last year exposed hacking files and instructions, all saved in a folder interestingly named "hacky hack hack". Reportedly fascinated with the tech giant, the 16-year-old confessed that the hacking began because he dreamed of one day working for Apple, a Melbourne court heard. He hacked into Apple's mainframe, downloaded internal files and accessed customer accounts. The teen managed to obtain customers' authorized keys, which could grant anybody access to user accounts and which, by the way, are considered extremely secure. What is surprising is that he hacked into Apple not just once but multiple times over the course of the past year. In spite of his downloading 90GB of secure files and accessing customer accounts, Apple has denied that customers were affected. The company stated that it identified the security breach and notified the FBI, which in turn referred the matter to the Australian federal police.
A prosecutor shed further light on the incident, acknowledging that "two Apple laptops were seized and the serial numbers matched the serial numbers of the devices which accessed the internal systems". He further added that "a mobile phone and hard drive were also seized", whose IP addresses matched those detected in the breaches. An Apple spokesperson tried to provide solace to customers by releasing a statement saying that the company vigilantly protects its networks and has dedicated teams of information security professionals who work to detect and respond to threats. He added, "In this case, our teams discovered the unauthorized access, contained it, and reported the incident to law enforcement. We regard the data security of our users as one of our greatest responsibilities and want to assure our customers that at no point during this incident was their personal data compromised."

The boy's audacity is further highlighted by the fact that he shared details of his hacking with members of a WhatsApp group. He has pleaded guilty and will return to court for sentencing in September; because of the complexities involved in the case, the magistrate has decided to announce the sentence by next week. Head over to fossbytes for detailed coverage of the case.

Apple stocks soar just shy of $1 Trillion market cap as revenue hits $53.3 Billion in Q3 earnings 2018
Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey
Timehop suffers data breach; 21 million users' data compromised


1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China

Fatema Patrawala
17 Aug 2018
4 min read
About a thousand Google employees, frustrated with a series of controversies involving Google, have signed a letter demanding transparency on the building of a censored search engine for China. The project, named Dragonfly, is a censored search engine for the Chinese market. In the letter the employees wrote, "Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment." The letter, published by BuzzFeed News, was circulated on Google's internal communications system and has been signed by about 1,400 Googlers.

The Dragonfly project would mark Google's return to China, eight years after it withdrew in protest against censorship and government hacking. China has the world's largest internet audience but has frustrated American tech giants with content restrictions or outright blockages of services including Facebook and Instagram.

Crisis already brewing at Google

This is not the first time Google's outspoken workforce has been agitated by changes in strategy. In April, the internet company's employees spoke out against its involvement in a Pentagon program that uses artificial intelligence to improve weaponry. Over 4,000 employees signed a petition asking the company to cancel it; a dozen engineers resigned in protest, and Google eventually promised not to renew the contract. Following that uproar, Google published AI ethics guidelines for the company. The letter about Dragonfly currently being circulated inside the company argues that those guidelines are not enough. "As a company and as individuals we have a responsibility to use this power to better the world, not to support social control, violence, and oppression," the letter reads. "What is clear is that Ethical Principles on paper are not enough to ensure ethical decision making. We need transparency, oversight, and accountability mechanisms sufficient to allow informed ethical choice and deliberation across the company."

What does Google's management say?

Allison Day, a program manager at Google, is not shocked by this outrage. "I can see the bottom line for any corporation is growth, and [China] represented a gigantic market," she told BuzzFeed News. "The 'Don't be Evil' slogan or whatever is, you know… It's not a farce. I wouldn't go so far as to say that. But it is a giant corporation, and its bottom line is to make money."

Google CEO Sundar Pichai has repeatedly expressed interest in the company making a return to China, which it pulled out of for political reasons in 2010. Pichai's apparent decision to return, which was not addressed companywide before Thursday, has caused some employees to consider leaving the company altogether. "There are questions about how [Dragonfly] is implemented that could make it less concerning, or much more concerning," an anonymous Google employee said. "That will continue to be on my mind, and the mind of other Googlers deciding whether to stay."

The Dragonfly project's secrecy

Two Google employees who were working on Dragonfly were so disturbed by the secrecy that they quit the team over it. Developers working on the project had been asked to keep Dragonfly confidential, not just from the public, but also from their coworkers. Even more upsetting to some employees is the fact that the company has blocked off internal access to Dragonfly's code; managers also shut down access to certain documents pertaining to the project, according to the Intercept. Employees feel this is a special kind of betrayal and erosion of trust, because the company talks and acts like, "Once you're at Google, you can look up the code anywhere in the code base and see for yourself." "We pride ourselves on having an open and transparent culture," said the anonymous Google developer. "There [are] definitely employees at the company who are very frustrated because that's clearly not true."

Google has not responded to specific questions about Dragonfly from the Intercept, Bloomberg, or BuzzFeed News, saying only in a statement, "We don't comment on speculation about future plans." An anonymous Google developer said, "Even though a lot of us have really good jobs, we can see that the difference between us and the leadership is still astronomical. The vision they have for the future is not our vision."

Google releases new political ads library as part of its transparency report
Google is missing out $50 million because of Fortnite's decision to bypass Play Store
Google's censored Chinese search engine is a stupid, stupid move, says former exec Lokman Tsui

Evaluation of Third-Party Cookie Policies reveals a lineup of never-seen, currently unblockable web-tracking techniques

Melisha Dsouza
17 Aug 2018
5 min read
Identifying and authenticating users on the web is a cakewalk thanks to HTTP cookies. They allow website developers to store users' website preferences or authentication tokens in the browser; in turn, users can remain logged into a website without having to re-enter their credentials again and again. A win-win situation for everybody, right? Hold your horses. In the ever-evolving web, the way these cookies are implemented leaves room for attackers to perform intrusive attacks.

Exploring this domain, researchers at Belgium's Catholic University in Leuven bagged a Distinguished Paper award at this year's Usenix Security Conference for their presentation "Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies".

How did the team discover these web security loopholes?

The authors revealed an array of surprisingly devastating, never-seen-before tracking techniques, capable of identifying web users even when they use privacy tools supplied by browser vendors or third-party tracking-blocking tools. They tested a total of 7 browsers and 46 browser extensions. The tracking techniques used the AppCache API; "lesser-known HTML tags"; the Location response header; various <meta> redirects; JavaScript in PDF files; JavaScript's location.href property; and various service workers to track users across sites. These techniques managed to bypass the stock browser privacy protections, and they also fiddled with the latest privacy settings of Firefox. The techniques were advanced enough to work against popular cookie-blocking, ad-blocking and script-blocking browser extensions. Thankfully, there are no immediate real-world concerns about these techniques being exploited: the researchers tipped off the browser vendors before they went public.
This should stand as a lesson for browsers to be better equipped to defend against these tactics. But until then, we're all vulnerable to websites using these tactics to track us virtually everywhere. A snapshot of the results is available at wholeftopenthecookiejar.eu.

Exploits and their countermeasures as explored by the researchers

The team has not only come up with a list of 10 exploits but has also suggested measures to combat them. Here is the list, in brief:

#1 Bypasses for the Opera ad blocker discovered
Even while the built-in ad blocker is enabled, requests to cross-site blacklisted domains can still be sent using various mechanisms in Opera.

#2 Various bypasses discovered for the same-site cookie policy in Edge
The same-site cookie policy implemented by Edge can be bypassed in multiple ways.

#3 The option to block third-party cookies in Safari 10 does not exclude cookies set in a first-party context from future cross-site requests
In Safari 10, when users enable "allow cookies from the current website only", cookies that are set in a first-party context are still included in cross-site requests. Safari blocks only the setting of cookies, not the sending of cookies.

#4 Enabling the option to block third-party cookies in Edge has no effect
Even when users enable the option to block third-party cookies in Edge, cookies are still included in all requests.

#5 The option to block third-party cookies can be bypassed in Chromium through PDF files
JavaScript embedded in PDF files can be used to send GET or POST requests to a cross-site domain. In Chromium, this bypasses the option to block third-party cookies. Affected browsers are Chrome and Opera.
#6 Cross-site requests initiated by PDF files bypass the WebExtension API provided by Chromium
The researchers found that extensions such as ad blockers or privacy extensions cannot use the WebExtension API to intercept requests initiated by PDF files opened in Chrome or Opera.

#7 Bypasses for Firefox Tracking Protection discovered
Firefox Tracking Protection can be bypassed easily by various mechanisms: cross-site requests directed at blacklisted domains can be sent even while this countermeasure is enabled.

#8 Requests initiated by the AppCache API are not easily distinguished from requests initiated by browser background processes
Again in Firefox, it is difficult for extension developers to distinguish requests initiated by the browser's background processes from requests initiated by websites.

#9 Requests to fetch the favicon are not interceptable by Firefox extensions
Firefox extensions were not able to intercept (cross-site) requests to fetch the favicon through the WebExtension API, though this has since been fixed.

#10 Same-site cookie policy bypass discovered in Chromium
Prerender functionality can be leveraged to initiate cross-site requests, and these requests can include same-site cookies assigned the value strict. The bug was no longer detected for multiple versions starting from Chrome 62, but it returned in Chrome 66, 67 and 68.

You can read the entire catalog to understand how your cookies are at stake (pun intended). The browser vendors have been made aware of these bugs, and solutions have been proposed to rectify browser APIs and tools to deal with these exploits. Along with the aforementioned reports, wholeftopenthecookiejar.eu includes a breakdown of every test the researchers carried out against each of the 7 browsers and 46 extensions, and in which versions.
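For readers unfamiliar with the same-site cookie policy referenced in exploits #3 and #10: a server opts a cookie out of cross-site requests via the SameSite attribute on the Set-Cookie header. A minimal sketch using Python's standard http.cookies module (illustrative values only; the cookie name and value here are not from the paper):

```python
# Build a Set-Cookie header with the attributes the paper's exploits target.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Strict"  # compliant browsers omit it on cross-site requests
cookie["session"]["httponly"] = True      # not readable from page JavaScript
cookie["session"]["secure"] = True        # sent over HTTPS only

header = cookie["session"].OutputString()
print("Set-Cookie:", header)
```

Exploit #10 shows why the attribute alone is not a guarantee: Chromium's prerender machinery could be coaxed into attaching even SameSite=Strict cookies to cross-site requests.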
You can read the paper presented by Gertjan Franken, Tom Van Goethem and Wouter Joosen for an inside view of why they won the award.

10 great tools to stay completely anonymous online
Mozilla's new Firefox DNS security updates spark privacy hue and cry
Top 5 cybersecurity trends you should be aware of in 2018


Google releases new political ads library as part of its transparency report

Natasha Mathur
16 Aug 2018
3 min read
Google yesterday released an archive of political ads purchased on its platforms. The new library of political ads reveals how much money is spent on these ads across different states and congressional districts, along with a list of top advertisers. Political ads here are those featuring federal candidates or currently elected federal officeholders.

Google has been modifying its transparency report by adding different sections over the years, in response to European privacy laws, encryption adoption on websites (i.e. HTTPS), and other evolving policy and user expectations. Read also: EU slaps Google with $5 billion fine for the Android antitrust case

The latest archive is another newly added section in the company's regular transparency report. This report shares data revealing "how the policies and actions of governments and corporations affect privacy, security, and access to information", and the new library is Google's effort to make things more transparent when it comes to online political advertisements. Now, any advertiser purchasing election ads on Google in the U.S. has to "provide a government-issued ID and other key information that confirms they are a U.S. citizen or lawful permanent resident, as required by law. We also required that election ads incorporate a clear 'paid for by' disclosure", says Google.

The new election ad library is searchable, downloadable and provides information about the ads with the highest views, the latest election ads running on Google's platform, and specific advertisers' campaigns. The data from the Ad Library is publicly available on Google Cloud's BigQuery. This is particularly helpful for researchers, political watchdog groups and private citizens, who can leverage the data to develop charts, graphs, tables or other visualizations of political ads on Google Ads services. Apart from Google, Facebook and Twitter are other tech giants who have launched ad archives in recent months.
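Since the Ad Library data lives on BigQuery, a researcher could query it with ordinary SQL. A hedged sketch: the dataset and table names below (bigquery-public-data.google_political_ads, advertiser_stats) and the spend_usd column are assumptions based on BigQuery's public datasets, not details confirmed by this article, so verify them in the BigQuery console before running:

```python
# Compose a query over the (assumed) public political-ads dataset.
TABLE = "bigquery-public-data.google_political_ads.advertiser_stats"

query = f"""
SELECT advertiser_name, spend_usd
FROM `{TABLE}`
ORDER BY spend_usd DESC
LIMIT 10
"""
print(query)

# With credentials configured, the query could then be executed via the
# official client library, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```

This is the kind of aggregation (top spenders) the article says watchdog groups can build charts and tables from.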
Twitter's ad archive is part of the company’s increased transparency efforts. “We clearly label and show disclaimer information for federal political campaigning ads,” says Twitter. Facebook has been embroiled in controversy over advertisements, especially after an outcry over Russians’ alleged purchase of political ads during the 2016 elections. Last month, Washington A.G. Bob Ferguson also found Facebook guilty of allowing discriminatory advertisements on its platform. Facebook now has its own political ad archive that shows who paid for its ads, along with other details. Google seems to be following Twitter and Facebook’s footsteps when it comes to political and issue-based advertising on its platform. Whether this comes at the right time, with the election season coming up soon, is another matter for debate. The new database is updated every week, and anyone can see the newly uploaded ads and the advertisers behind them. Google mentioned in its blog that despite the Ad Library providing many new insights, it is still “working with experts in the U.S. and around the world to explore tools that capture a wider range of political ads—including ads about political issues (beyond just candidate ads), state and local election ads, and political ads in other countries”. Google’s aim is to protect these campaigns from digital attacks. “We hope this provides unprecedented, data-driven insights into election ads on our platform,” says Google. For more information regarding Google’s new political ad archive, check out the official Google blog post. Facebook must stop discriminatory advertising in the US, declares Washington AG, Ferguson Google’s new facial recognition patent uses your social network to identify you! Google is missing out $50 million because of Fortnite’s decision to bypass Play Store

A Twitter video shows how voting machines used in 18 states can be hacked in 2 mins

Fatema Patrawala
16 Aug 2018
3 min read
At the 26th annual DEF CON conference in Las Vegas last week, an alarming video posted on Twitter reminded attendees that US election infrastructure is susceptible to ulterior motives. https://twitter.com/RachelTobac/status/1029449569266884608 Rachel Tobac, CEO of SocialProof Security, demonstrated in her Twitter status how a voting machine can be hacked in under two minutes. SocialProof Security provides assessments for social-engineering-based security. Social engineering involves tricking people into giving up information that lets hackers bypass physical and computer security systems. It is most commonly done with a simple phone call: talking a tech support agent into resetting a password, or extracting information about a company’s network by asking an unwary staffer a few leading questions. Tobac explained that accessing the voting machine’s admin function is akin to opening the hood of a car with a release button: unplugging the card reader, picking a lock, and turning on the machine with a ballpoint pen. The model of voting machine used was the Premier AccuVote TS or TSX, which is used in more than 18 states for elections. Jake Braun, organizer of the Voting Village, commented to the Wall Street Journal, “This is not the cyber mature industry.” Meanwhile the National Association of Secretaries of State, which represents the officials who run state elections, issued a statement discrediting the hackers: “Our main concern with the approach taken by DEFCON is that it uses a pseudo environment which in no way replicates state election systems, networks, or physical security,” it said. “Providing conference attendees with unlimited physical access to voting machines,” NASS said, “does not replicate accurate physical and cyber protections established by state and local governments before and on Election Day.” This is the second year in a row in which DEF CON’s Voting Village has hacked election systems.
Other experiments included an 11-year-old girl hacking a replica of the Florida secretary of state’s website and changing the results in 10 minutes. There were suggestions to use blockchain-based voting systems to maintain the integrity of elections. Regardless of how that is implemented, this is an area of concern and should be addressed to prevent tampering with future elections. 7 Black Hat USA 2018 conference cybersecurity training highlights: Hardware attacks, IO campaigns, Threat Hunting, Fuzzing, and more DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news

Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey

Sugandha Lahoti
16 Aug 2018
2 min read
Amidst the discussions going on around social media websites regulating their content or facing legal action, Twitter CEO Jack Dorsey announced plans to rethink the core of how Twitter works. In an interview with the Washington Post, Dorsey said that he is experimenting with features that would promote alternative viewpoints in Twitter’s timeline to address misinformation and reduce echo chambers. “The most important thing that we can do is we look at the incentives that we’re building into our product,” Dorsey said. “Because they do express a point of view of what we want people to do — and I don't think they are correct anymore.” https://twitter.com/jack/status/1029846451524960261 Dorsey’s move is a clear indication that Silicon Valley leaders are getting serious about improving safety, security, and privacy across their services. In recent months, Twitter has made several moves to combat fake news and other data-related scandals. Earlier this month, Apple, Facebook, and Spotify took action against Alex Jones. Initially, Twitter allowed Jones to continue using its service, but on Tuesday it imposed a seven-day “timeout” on him after he encouraged his followers to get their “battle rifles” ready against critics in the “mainstream media” and on the left. Last month, the social media giant allegedly deleted 70 million fake accounts in an attempt to curb fake news, and it has been constantly suspending accounts that are inauthentic, spammy, or created via malicious automated bots. Another solution Twitter is exploring is to surround false tweets with factual context. Dorsey said that more context about a tweet, including tweets that call it out as obviously fake, could help people make judgments for themselves. It is also planning to label automated accounts; legislators and federal lawmakers have already proposed putting such requirements into law.
The social media website is also auditing existing accounts for signs of automated sign-up and improving the overall sign-up process. What is left to see now is whether Twitter can effectively deliver on these claims, or whether Dorsey’s statements will go down the drain. You can read Dorsey’s entire interview on the Washington Post. How to stay safe while using Social Media Facebook plans to use Bloomsbury AI to fight fake news YouTube has a $25 million plan to counter fake news and misinformation

Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips

Melisha Dsouza
16 Aug 2018
5 min read
Intel's chips have been struck with yet another significant flaw, called ‘Foreshadow’. This flaw, alternatively called the L1 Terminal Fault or L1TF, targets Intel’s Software Guard Extensions (SGX) within its Core chips. The US government’s body for computer security warned that an attacker could take advantage of this vulnerability in Intel’s chips to obtain sensitive information. The security flaw affects processors released from 2015 onwards. Thankfully, Intel has released a patch to combat the problem; check the full list of affected hardware on Intel's website. While Intel confirmed that it is not aware of any of these methods being used in real-world exploits, the tech giant is now under scrutiny. This was bound to happen, as Foreshadow completes a hat-trick for Intel, following the two similar attacks — Spectre and Meltdown — that were discovered in January this year. Intel confirms that future processors will be built so as not to be affected by Foreshadow. How does Foreshadow affect your data? The flaw was first brought to Intel’s notice by researchers from KU Leuven University in Belgium and others from the universities of Adelaide and Michigan. Foreshadow exploits flaws in a computing technique known as speculative execution, and can specifically target a lock box within Intel’s processors, letting a hacker leak any data desired. To give you the gist: a processor can run more efficiently by guessing the next operation to be performed. A correct prediction saves resources, while work based on an incorrect prediction gets scrapped. However, the system leaves behind clues, such as how long the processor takes to fulfill a certain request. An attacker can use these clues to find weaknesses, ultimately gaining the ability to manipulate the path the speculation takes and to capture, at opportune moments, the data that leaks out of a process's storage cache.
Speculative-execution flaws are important to guard against, because an attacker could use them to access data and system privileges meant to be off-limits. The most intriguing part of the story, as stated by hardware security researcher and Foreshadow contributor Jo Van Bulck, is: “Spectre is focused on one speculation mechanism, Meltdown is another, and Foreshadow is another”. “This is not an attack on a particular user, it’s an attack on infrastructure,” said Yuval Yarom of the University of Adelaide. After the discovery of Spectre and Meltdown, the researchers found it only fitting to look for speculative-execution flaws in the SGX enclave. To give you an overview: Software Guard Extensions, or SGX, was originally designed to protect code from disclosure or modification. SGX is included in 7th-generation Core chips and above, as well as the corresponding Xeon generation. It remains protected even when the BIOS, VMM, operating system, and drivers are compromised, meaning that an attacker with full execution control over the platform can be kept away. SGX allows programs to establish secure enclaves on Intel processors: regions of a chip restricted to running code that the computer's operating system can't access or change. This creates a safe space for sensitive data; even if the main computer is compromised by malware, the sensitive data remains safe. That, apparently, isn't totally the case. Wired further stresses that the Foreshadow bug could break down the walls between virtual machines, a real concern for cloud companies whose services share space with other theoretically isolated processes. Watch this YouTube video for more clarity on how Foreshadow works. https://www.youtube.com/watch?v=ynB1inl4G3c&feature=youtu.be The quick fix to Foreshadow: prior to details of the flaw being made public, Intel had created its fix and coordinated its response with the researchers on Tuesday.
The fix disables some of the chips' features that were vulnerable to the attack. Along with software mitigations, the bug will also be patched at the hardware level in Cascade Lake, an upcoming Xeon chip, as well as in future Intel processors expected to launch later this year. The mitigation limits the extent to which the same processor can be used simultaneously for multiple tasks, so companies running cloud computing platforms could see a significant hit to their collective computing power. On Tuesday, cloud services companies Amazon, Google, and Microsoft said they had put a fix for the problem in place. Intel is working with these cloud providers, where uptime and performance are key, to “detect L1TF-based exploits during system operation, applying mitigation only when necessary,” wrote Leslie Culbertson, executive vice president and general manager of Product Assurance and Security at Intel. Individual computer users are advised, as ever, to download and install any available software updates. The research team confirmed that it was unlikely individuals would see any performance impact: as long as your system is patched, you should be okay. Check out PCWorld’s guide on how to protect your PC against Meltdown and Spectre. You can also head over to the Red Hat blog for more on Foreshadow. NetSpectre attack exploits data from CPU memory Intel’s Spectre variant 4 patch impacts CPU performance 7 Black Hat USA 2018 conference cybersecurity training highlights

Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Savia Lobo
14 Aug 2018
5 min read
On 31st July 2018, Eric Holmes, a security researcher, easily gained access to Homebrew's GitHub repo (he documents his experience in an in-depth Medium post). Homebrew is a free and open-source software package management system, with well-known packages like node, git, and many more, that simplifies the installation of software on macOS. Eric gained git push access to Homebrew/brew and Homebrew/homebrew-core, and was able to invade the project and make his first commit into Homebrew’s GitHub repo within 30 minutes. Attack = higher chances of obtaining user credentials. After getting such easy access to Homebrew’s GitHub repositories, Eric’s prime motive was to uncover user credentials of some of the members of the Homebrew GitHub org. For this, he made use of an OSINT tool by Michael Henriksen called gitrob, which automates the credential search; however, he could not find anything interesting. Next, he explored Homebrew’s previously disclosed issues on https://hackerone.com/Homebrew, which led him to the observation that Homebrew runs a Jenkins instance that is (intentionally) publicly exposed at https://jenkins.brew.sh. Digging further, Eric noticed that the builds in the “Homebrew Bottles” project were making authenticated pushes to the BrewTestBot/homebrew-core repo, which led him to an exposed GitHub API token. The token opened commit access to these core Homebrew repos: Homebrew/brew, Homebrew/homebrew-core, and Homebrew/formulae.brew.sh. Eric stated in his post, “If I were a malicious actor, I could have made a small, likely unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it.” Via such a backdoor, intruders could have gained access to private company networks that use Homebrew, which could further lead to a large-scale data breach. Eric reported this issue to Homebrew developer Mike McQuaid.
The issue was then publicly disclosed on the blog at https://brew.sh/2018/08/05/security-incident-disclosure/. Within a few hours the credentials had been revoked, replaced, and sanitised within Jenkins so they would not be revealed in future. Homebrew/brew and Homebrew/homebrew-core were updated so that non-administrators on those repositories cannot push directly to master. The Homebrew team worked with GitHub to audit the access token and ensure that it wasn't used maliciously and didn't make any unexpected commits to the core Homebrew repos. As an ethical hacker, Eric reported the vulnerabilities he found to the Homebrew team and did no harm to the repo itself. But not all projects may have such happy endings. How can one safeguard their systems from supply chain attacks? Eric Holmes took the responsible route of informing the Homebrew developers. However, not every hacker has good intentions, and it is one's responsibility to keep a check on all the supply chains associated with an organization. Keeping a check on all the libraries: one should not allow random libraries into the supply chain, because it is difficult to partition libraries from an organization's custom code; both run with the same privilege, risking the company's security. Make sure to lay down policies around the code the company wishes to allow: only projects with high popularity, active committers, and evidence of process should be admitted. Establishing guidelines: each company should create guidelines for the secure use of the libraries selected. Define up front what the libraries are expected to be used for, and give developers detailed instructions for safely installing, configuring, and using each library within their code. Identification of dangerous methods, and how to use them safely, should also be covered.
A thorough vigilance within the inventory: every organization should keep a check on which open source libraries they are using within their inventories. They should also set up a notification system that keeps them abreast of new vulnerabilities affecting their applications and servers. Protection during runtime: organizations should also make use of runtime application self-protection (RASP) to prevent both known and unknown library vulnerabilities from being exploited. If new vulnerabilities are noticed, the RASP infrastructure enables one to respond in minutes. The software supply chain is key to creating and deploying applications quickly; hence, one should take complete care to avoid any misuse via this channel. Read the detailed story of Homebrew's attack escape on its blog post, and Eric's firsthand account of how he went about planning the attack and the motivation behind it on his Medium post. DCLeaks and Guccifer 2.0: Hackers used social engineering to manipulate the 2016 U.S. elections Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news YouTube has a $25 million plan to counter fake news and misinformation
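Leaks like the exposed Jenkins API token above are exactly what automated credential scanners in the spirit of gitrob look for. A minimal sketch of the idea in Python follows; the patterns and the sample string are illustrative only, not Homebrew's actual rules or data, and real scanners use far larger rule sets plus entropy checks:

```python
import re

# Illustrative detection rules (real tools like gitrob ship many more).
PATTERNS = {
    "github_token": re.compile(r"\b[0-9a-f]{40}\b"),          # classic 40-hex GitHub token
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
}

def scan(text):
    """Return (rule_name, match) pairs for every suspected credential in text."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m) for m in pattern.findall(text))
    return hits

# A fabricated log line of the kind a CI build might leak:
sample_log = "pushing with token 9d6a1f0e2b3c4d5e6f708192a3b4c5d6e7f80912 to origin"
print(scan(sample_log))
```

Running a scan like this over build logs and repository history is a cheap first line of defense against the token exposure described above.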

IBM’s DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware

Melisha Dsouza
13 Aug 2018
4 min read
In the newfound age of Artificial Intelligence, where everything and everyone uses machine learning to make life easier, the dark side of the same technology often goes unexplored. Cybersecurity is gaining a lot of attention these days. Some of the most influential organizations have experienced downfalls because of undetected malware that managed to evade even the most secure cyber defense mechanisms. The job just got easier for cyber criminals who exploit AI to empower themselves and launch attacks. Imagine combining AI with cyber attacks! At last week’s Black Hat USA 2018 conference, IBM researchers presented their newly developed, AI-backed malware “DeepLocker”. Weaponized AI seems here to stay. Read also: Black Hat USA 2018 conference highlights for cybersecurity professionals. All you need to know about DeepLocker: simply put, DeepLocker is a new generation of malware which can stay under the radar and go undetected till its target is reached. It uses an Artificial Intelligence model to identify its target using indicators like facial recognition, geolocation, and voice recognition, all of which are easily available on the web these days! What’s interesting is that the malware can hide its malicious payload in carrier applications, like a video conferencing application, and go undetected by most antivirus and malware scanners until it reaches specific victims. Imagine sitting at your computer performing daily tasks. Given that your profile pictures are available on the internet, your video camera can be manipulated to find a match to your online picture. Once the target (your face) is identified, the malicious payload is unleashed: your face serves as the key that unlocks the virus. This simple “trigger condition” is almost impossible to reverse engineer, because the malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.
The simple “if this, then that” trigger condition used by DeepLocker is transformed into a deep convolutional network inside the AI model. [Image: DeepLocker – AI-Powered Concealment. Source: SecurityIntelligence] DeepLocker makes it really difficult for malware analysts to answer the three main questions: What target is the malware after — people’s faces or some other visual clues? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload? That's some commendable work by the IBM researchers. IBM has always strived to make a mark in the field of innovation; DeepLocker comes as no surprise, as IBM has the highest number of facial recognition patents granted in 2018. Black Hat USA 2018 sneak preview: the main aim of the IBM researchers Marc Ph. Stoecklin, Jiyong Jang, and Dhilung Kirat in briefing the crowd at the Black Hat USA 2018 conference was to raise awareness that AI-powered threats like DeepLocker can be expected very soon; to demonstrate how attackers have the capability to build stealthy malware that can circumvent commonly deployed defenses; and to provide insights into how to reduce risks and deploy adequate countermeasures. To demonstrate DeepLocker's capabilities, they designed and demonstrated a proof of concept: the WannaCry ransomware was camouflaged in a benign video conferencing application so that it remained undetected by antivirus engines and malware sandboxes. As a triggering condition, an individual was selected, and the AI was trained to launch the malware when certain conditions, including the facial recognition of the target, were met. The experiment was, undoubtedly, a success. DeepLocker is just an experiment by IBM to show how open-source AI tools can be combined with straightforward evasion techniques to build targeted, evasive, and highly effective malware.
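The concealment idea behind that trigger condition can be illustrated with a harmless sketch: the "payload" is encrypted with a key derived from the target's attributes, so the sample itself carries only an opaque blob and a hash — an analyst holding it cannot recover what unlocks it. In the sketch below a plain string stands in for a face-recognition result, and every name and the payload are fabricated for illustration:

```python
import hashlib

SECRET_ATTRIBUTE = "alice"   # stands in for the intended target's recognized features

def derive_key(attribute: str) -> bytes:
    """Turn an observed target attribute into a symmetric key."""
    return hashlib.sha256(attribute.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (XOR) for the demonstration."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# What would ship with the sample: the locked blob plus a hash of the key,
# but never the key or the attribute itself.
locked = xor_bytes(b"payload unlocked", derive_key(SECRET_ATTRIBUTE))
key_check = hashlib.sha256(derive_key(SECRET_ATTRIBUTE)).hexdigest()

def try_unlock(observed_attribute: str):
    key = derive_key(observed_attribute)
    if hashlib.sha256(key).hexdigest() != key_check:
        return None                      # wrong target: the blob stays opaque
    return xor_bytes(locked, key)

print(try_unlock("bob"))    # wrong target, nothing unlocks
print(try_unlock("alice"))  # intended target, payload revealed
```

This is why the trigger is so hard to reverse engineer: brute-forcing the attribute space (all possible faces) is infeasible, yet the check is instant for the true target.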
As the world of cybersecurity constantly evolves, security professionals will now have to up their game to combat hybrid malware attacks. Found this article interesting? Read the Security Intelligence blog to discover more. 7 Black Hat USA 2018 conference cybersecurity training highlights 12 common malware types you should know Social engineering attacks – things to watch out for while online

Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Savia Lobo
13 Aug 2018
3 min read
The Internet Engineering Task Force (IETF), the organization that defines internet protocols, has standardized the latest version of its most important security protocol, Transport Layer Security (TLS). Introducing TLS 1.3: the latest version, published as RFC 8446 on August 10, 2018. This version is the first major overhaul of the protocol, bringing significant security and performance improvements. https://youtu.be/HFzXrqw-UpI TLS 1.3 vs TLS 1.2: TLS 1.2 was defined in RFC 5246 and has been in use by the majority of web browsers for eight years. The IETF finalized TLS 1.3 on March 21, 2018. TLS 1.2 can still be deployed securely; however, several high-profile vulnerabilities have exploited certain parts of the 1.2 protocol along with some outdated algorithms. In TLS 1.3, all of these problems have been resolved, and the included algorithms have no known vulnerabilities. In contrast to TLS 1.2, v1.3 adds privacy to data exchanges by encrypting more of the negotiation handshake to protect it from eavesdroppers, which helps protect the identities of the participants and impedes traffic analysis. In short, TLS 1.3 brings performance improvements such as faster speed alongside increased security. Companies such as Cloudflare are already making TLS 1.3 available to their customers. What’s new in TLS v1.3? Improved security: the outdated and insecure features of TLS 1.2 removed in v1.3 include SHA-1, RC4, DES, 3DES, AES-CBC, MD5, arbitrary Diffie-Hellman groups (CVE-2016-0701), and EXPORT-strength ciphers (responsible for FREAK and LogJam). The cryptographic community kept a constant check on TLS 1.3, analyzing, improving, and validating its security. The new version also removes all primitives and features that have contributed to weak configurations and enabled common vulnerability exploits like DROWN, Vaudenay, Lucky 13, POODLE, SLOTH, CRIME, and more.
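Applications can opt into these guarantees by refusing anything older than TLS 1.3. A minimal client-side sketch with Python's standard ssl module (this assumes Python 3.7+ built against OpenSSL 1.1.1 or newer; older builds report no 1.3 support):

```python
import ssl

# Build a default client context, then raise the floor to TLS 1.3.
ctx = ssl.create_default_context()
if ssl.HAS_TLSv1_3:                        # False when OpenSSL < 1.1.1
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print("TLS 1.3 available in this build:", ssl.HAS_TLSv1_3)
```

Any `ctx.wrap_socket(...)` connection made with this context will then fail the handshake against servers that only speak TLS 1.2 or older.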
Improved speed: web performance has long been affected by TLS and other encrypted connections; HTTP/2 helped overcome part of this problem, and TLS 1.3 speeds up encrypted connections even more with features such as TLS False Start and Zero Round Trip Time (0-RTT). Simply put, TLS 1.2 requires two round-trips to complete the TLS handshake, while v1.3 requires only one round-trip, which cuts the encryption latency in half. Another interesting feature of TLS 1.3 is that, for sites the user has visited previously, the client can now send data in the very first message to the server. This is called a “zero round trip” (0-RTT) and results in improved load times. Browser support for TLS v1.3: Google has started warning users in Search Console that it is moving to TLS version 1.2, as TLS 1.0 is no longer that safe. TLS 1.3 is enabled in Chrome 63 for outgoing connections; support was first added back in Chrome 56 and is also present in Chrome for Android. https://twitter.com/screamingfrog/status/940501282653077505 TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum), though Mozilla is retaining an insecure fallback to TLS 1.2 until more is known about server tolerance and the 1.3 handshake. [Image: TLS 1.3 browser support] Other browsers, such as IE, Microsoft Edge, Opera, and Safari, do not support TLS 1.3 yet; this will take some time while the protocol is finalized and browsers catch up, and most of the remaining implementations are in development at the moment. Read more about this in detail on the IETF blog. Analyzing Transport Layer Protocols Communication and Network Security A new WPA/WPA2 security attack in town: Wi-fi routers watch out! Mozilla’s new Firefox DNS security updates spark privacy hue and cry

7 Black Hat USA 2018 conference cybersecurity training highlights: Hardware attacks, IO campaigns, Threat Hunting, Fuzzing, and more

Melisha Dsouza
11 Aug 2018
7 min read
The 21st Black Hat USA conference has just concluded. It took place from August 4 to August 9, 2018, in Las Vegas, Nevada, and is one of the most anticipated conferences of the year for security practitioners, executives, business developers, and anyone who is a cybersecurity fanatic wanting to expand their horizons into the world of security. Black Hat USA 2018 opened with four days of technical training, followed by the two-day main conference featuring Briefings, Arsenal, the Business Hall, and more. The conference covered exclusive training modules that provided hands-on offensive and defensive skill-building opportunities for security professionals. The Briefings covered the nitty-gritty of all the latest trends in information security, and the Business Hall gathered a network of more than 17,000 InfoSec professionals who evaluated a range of security products offered by Black Hat sponsors. Best cybersecurity trainings at the conference: for more than 20 years, Black Hat has been providing its attendees with trainings that stand the test of time and prove to be an asset in penetration testing. The training modules designed exclusively for Black Hat attendees are run by industry and subject-matter experts from all over the world, with the goal of shaping the information security landscape. Here's a look at a few from this year's conference. #1 Applied Hardware Attacks: Embedded and IoT Systems. This hands-on training, led by Josh Datko and Joe Fitzpatrick, introduced students to the common interfaces on embedded MIPS and ARM systems and taught them how to exploit physical access to grant themselves software privilege. It focused on UART, JTAG, and SPI interfaces; students were given a brief architectural overview, and 70% of the course was hands-on labs: identifying, observing, interacting with, and eventually exploiting each interface. Basic analysis and manipulation of firmware images were also covered.
This two-day course was geared toward pen testers, red teamers, exploit developers, and product developers who wished to learn how to take advantage of physical access to systems to assist and enable other attacks. The course also aimed to show security researchers and enthusiasts who are unwilling to 'just trust the hardware' how to gain deeper insight into how hardware works and can be undermined. #2 Information Operations: Influence, Exploit, and Counter. This fast-moving class included hands-on exercises to apply and reinforce the skills learned during the training, as well as a best-IO-campaign contest conducted live during the class. Trainers David Raymond and Gregory Conti covered information operations theory and practice in depth. Some of the main topics were IO strategies and tactics, countering information operations, and operations security and counter-intelligence. Students learned about online personas and explored the use of bots and AI to scale attacks and defenses. Other topics included understanding performance and assessment metrics, responding to an IO incident, the concepts of deception and counter-deception, and cyber-enabled IO. #3 Practical Vulnerability Discovery with Fuzzing. Abdul Aziz Hariri and Brian Gorenc trained students on techniques to quickly identify common patterns in specifications that produce vulnerable conditions. The course covered the process of building a successful fuzzer, highlighted public fuzzing frameworks that produce quality results, and used "real world" case studies to demonstrate the fundamentals being introduced. Students learned to leverage existing fuzzing frameworks, develop their own test harnesses, integrate publicly available data-generation engines, and automate the analysis of crashing test cases.
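The mutation-based workflow such a course builds on can be sketched in a few lines: take a valid seed input, flip some bytes, feed the result to the target, and keep every input that crashes it. Everything below is a toy illustration, not material from the course; the "parser" is a stand-in with a deliberately planted length-check bug:

```python
import random

def mutate(seed_bytes, rng, n_flips=4):
    """Randomly overwrite a few bytes of a seed input (simplest mutation strategy)."""
    data = bytearray(seed_bytes)
    for _ in range(n_flips):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed, iterations=2000, rng=None):
    """Run the target on mutated inputs; collect every input that crashes it."""
    rng = rng or random.Random(0)        # seeded for reproducibility
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

def toy_parser(buf):
    """Toy target: header byte, length byte, payload. Crashes on a length mismatch."""
    if len(buf) < 2 or buf[0] != ord("H"):
        return None                          # cleanly rejected input, not a crash
    if buf[1] != len(buf) - 2:
        raise ValueError("length mismatch")  # the planted bug the fuzzer should find
    return buf[2:]

crashes = fuzz(toy_parser, b"H\x04ABCD")
print(f"{len(crashes)} crashing inputs found")
```

Real frameworks add coverage feedback, smarter mutation, and crash triage on top of exactly this loop.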
This class was aimed at individuals wanting to learn the fundamentals of the fuzzing process, develop advanced fuzzing frameworks, and/or improve their bug-finding capabilities. #4 Active Directory Attacks for Red and Blue Teams. Nikhil Mittal's main aim in conducting this training was to change how you test an Active Directory environment. To secure Active Directory, it is important to understand the different techniques and attacks adversaries use against it, and many AD environments lack the ability to tackle the latest threats. The training was therefore aimed at attacking a modern AD environment using built-in tools like PowerShell and other trusted OS resources, and was based on real-world penetration tests and Red Team engagements in highly secured environments. Some of the techniques covered were: extensive AD enumeration; Active Directory trust mapping and abuse; privilege escalation (user hunting, delegation issues, and more); Kerberos attacks and defense (Golden and Silver tickets, Kerberoast, and more); abusing cross-forest trust (lateral movement across forests, PrivEsc, and more); attacking Azure integration and components; abusing SQL Server trust in AD (command execution, trust abuse, lateral movement); credential replay attacks (over-pass-the-hash, token replay, etc.); persistence (WMI, GPO, ACLs, and more); defenses (JEA, PAW, LAPS, deception, app whitelisting, Advanced Threat Analytics, etc.); and bypassing defenses. Attendees also received one month of free access to an Active Directory environment comprising multiple domains and forests, available during and after the training. #5 Hands-on Power Analysis and Glitching with ChipWhisperer. This course suited anyone dealing with embedded systems who needs to understand the threats that can break even a "perfectly secure" system. Side-channel power analysis can be used to read out an AES-128 key in less than 60 seconds from a standard implementation on a small microcontroller.
Colin O'Flynn helped students understand whether their systems were vulnerable to such attacks. The course was loaded with hands-on examples teaching the attacks and the theory behind them, and included a ChipWhisperer-Lite, so students could walk away with the hardware used during the lab sessions. Topics covered during the two-day course included:

- Theory behind side-channel power analysis
- Measuring power in existing systems
- Setting up the ChipWhisperer hardware and software
- Several demonstrated attacks
- Understanding and demonstrating glitch attacks
- Analyzing your own hardware

#6 Threat Hunting with attacker TTPs

The main aim of this class was a proper threat hunting program focused on maximizing the effectiveness of scarce network-defense resources against a potentially limitless threat. Threat hunting takes a different perspective on network defense, relying on skilled operators to investigate and find the presence of malicious activity. Unlike standard network defense and incident response, which target flagging known malware, this training focused on abnormal behaviors and the use of attacker Tactics, Techniques, and Procedures (TTPs). Trainers Jared Atkinson, Robby Winchester and Roberto Rodriguez taught students how to create threat hunting hypotheses based on attacker TTPs and use them to perform threat hunting operations and detect attacker activity. In addition, they used free and open source data collection and analysis tools (Sysmon, ELK and the Automated Collection and Enrichment Platform) to gather and analyze large amounts of host information for signs of malicious activity. Students used these techniques and toolsets to create threat hunting hypotheses and hunt in a simulated enterprise network undergoing active compromise by various types of threat actors. The class was intended for defenders wanting to learn how to effectively hunt threats in enterprise networks.
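A hunting hypothesis of this kind can be as simple as "Office applications should never spawn a shell." The Python sketch below tests that hypothesis over Sysmon-style process-creation records; the field names follow Sysmon Event ID 1, but the events and the rule itself are fabricated for illustration, not taken from the course.

```python
# Hypothesis: Office applications spawning script/shell interpreters is
# abnormal and worth investigating. The parent/child pairs below are an
# illustrative example rule, not an exhaustive detection.
SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}

def basename(path: str) -> str:
    """Lower-cased final path component, tolerating both path separators."""
    return path.replace("\\", "/").rsplit("/", 1)[-1].lower()

def hunt(events):
    """Return process-creation events matching the hypothesis."""
    hits = []
    for ev in events:
        pair = (basename(ev.get("ParentImage", "")), basename(ev.get("Image", "")))
        if pair in SUSPICIOUS_PARENT_CHILD:
            hits.append(ev)
    return hits

if __name__ == "__main__":
    # Fabricated Sysmon Event ID 1 (process creation) records.
    events = [
        {"EventID": 1, "Image": r"C:\Windows\System32\svchost.exe",
         "ParentImage": r"C:\Windows\System32\services.exe"},
        {"EventID": 1, "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
         "ParentImage": r"C:\Program Files\Microsoft Office\WINWORD.EXE"},
    ]
    for hit in hunt(events):
        print("suspicious:", hit["ParentImage"], "->", hit["Image"])
```

In practice the events would come from an ELK pipeline rather than a hard-coded list, and each hit would feed an investigation rather than an automatic alert.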
#7 Hands-on Hardware Hacking Training

The class, taught by Joe Grand, took students through the process of reverse engineering and defeating the security of electronic devices. The comprehensive training covered:

- Product teardown
- Component identification
- Circuit board reverse engineering
- Soldering and desoldering
- Signal monitoring and analysis
- Memory extraction

These topics were taught using a variety of tools, including a logic analyzer, multimeter, and device programmer. The class concluded with a final challenge in which students identified, reverse engineered, and defeated the security mechanism of a custom embedded system. Anyone interested in hardware hacking, including security researchers, digital forensic investigators, design engineers, and executive management, benefited from this class.

And that's not all! Other trainings included Software Defined Radio, A Guide to Threat Hunting Utilizing the ELK Stack and Machine Learning, AWS and Azure Exploitation: Making the Cloud Rain Shells, and much more.

This is just a brief overview of the BlackHat USA 2018 conference, where we have handpicked a select few trainings. You can see the full schedule along with the list of selected research papers on the BlackHat website. And if you missed out on this one, fret not: another conference is happening soon, from 3rd December to 6th December 2018. Check out the official website for details.

Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!