
Tech News - Data

1208 Articles

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and the results and progress of Microsoft's workplace, business applications, infrastructure, data, AI, and gaming efforts. He also covers the data and privacy rules adopted by Microsoft and its commitment to "instill trust in technology across everything they do."

Microsoft's results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and Microsoft is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Office Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.
How Microsoft is handling trust and responsibility

Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices." Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. The company announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

Microsoft is also investing in tools for detecting and addressing bias in AI systems and is advocating for government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and has increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes by saying, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era."

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Developers rejoice! GitHub announces GitHub Actions, GitHub Connect and much more to improve development workflows

Melisha Dsouza
17 Oct 2018
5 min read
Yesterday, at the GitHub Universe annual developer conference held in San Francisco, the team announced a host of new changes to help developers manage and improve their development workflows. GitHub has been used by 31 million developers in the past year and is the most trusted code hosting platform. It has received enormous support from developers all over the globe, and the team has decided to repay this support by making life easier for developers. The new upgrades include:

GitHub Actions, which will help developers automate workflows and builds while sharing and executing code inside containers on GitHub
GitHub Connect, for facilitating a unified business identity, unified search, and unified contributions
Powerful security tools with the GitHub Security Advisory API
Improvements to the GitHub Learning Lab

Let's look at these updates in depth:

#1 GitHub Actions

"A lot of the major clouds have built products for sysadmins and not really for developers, and we want to hand power and flexibility back to the developer and give them the opportunity to pick the tools they want, configure them seamlessly, and then stand on the shoulders of the giants in the community around them on the GitHub platform" - GitHub head of platform Sam Lambert (in an interview with VentureBeat)

Software development demands that a project be broken down into hundreds, if not thousands, of small steps (depending on the scope of the project) to get the job done faster and more efficiently. This means that at every stage of development, teams need to coordinate to understand the progress of each step. Teams need to work concurrently and ensure that their actions don't overlap or overwrite changes made by other members. Many companies perform these checks manually, using different development tools, which takes up a lot of time and effort.

Enter GitHub Actions. This new feature uses code packaged in a Docker container running on GitHub's servers. Users can set up triggers for events, for instance, introducing new code to a project, packaging an NPM module, or sending an SMS alert. A trigger sets off Actions to take further steps defined by criteria set by administrators. Besides automating tasks, GitHub Actions allows users to connect and share containers to run their software development workflow. They can easily build, package, release, update, and deploy their project in any language, without having to run code themselves. Developer, Team, and Business Cloud plans can use Actions, which is available in limited public beta on GitHub.

#2 GitHub Connect

"GitHub Connect begins to break down organizational barriers, unify the experience across deployment types, and bring the power of the world's largest open-source community to developers at work." - Jason Warner, GitHub's senior vice president of technology

The team has announced that GitHub Connect is now generally available. GitHub Connect comes with new features like unified search, unified business identity, and unified contributions.

Unified search can search through both the open source code on the site and internal code. When searching from a GitHub Enterprise instance, users can view search results from public content on GitHub.com.

The unified business identity feature allows administrators to easily manage user accounts existing across separate Business Cloud installations. Using a single back-end interface, businesses can improve billing, licensing, permissions, and policy operations.
Many developers come across the issue of their contributions being locked behind the firewalls of private companies. Unified contributions lets developers get credit for the work they've done in the past on repositories for businesses.

#3 Better Security

The new GitHub Security Advisory API automates vulnerability scans and makes it easier for developers to find threats in their code. GitHub vulnerability alerts now support .NET and Java, and developers who use these languages will get a heads-up if any dependent code has a security exploit. GitHub will now also start scanning all public repositories for known token formats, so developers who accidentally commit their security tokens to public code can rest easier. On finding a known token, the team will alert the token provider to validate the commit and contact the account owner to issue a new token. From automating detection and remediation to tracking emergent security vulnerabilities, it looks like the team is going all out to improve its security functionality.

#4 The GitHub Learning Lab

GitHub Learning Lab helps developers get started with GitHub, manage merge conflicts, contribute to their first open source project, and more. The team announced three new Learning Lab courses, covering secure development workflows with GitHub, reviewing a pull request, and getting started with GitHub Apps. These courses will be made available to everyone. Developers can create private courses and learning paths, customize course content, and access administrative reports and metrics with the Learning Lab.

The announcements have caused a buzz among developers on Twitter:
https://twitter.com/fatih/status/1052238735755173888
https://twitter.com/sarah_edo/status/1052247186220568577
https://twitter.com/jmsaucier/status/1052322249372590081

It will be interesting to see how these updates shape the use of GitHub in the future. To know more about the announcement, head over to GitHub's official blog.

GitHub is bringing back Game Off, its sixth annual game building competition, in November
RawGit, the project that made sharing and testing code on GitHub easy, is shutting down!
GitHub comes to your code editor; GitHub security alerts now have machine intelligence
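The announcement itself does not include sample code, but the security advisory data GitHub exposes can be queried over its GraphQL API. The snippet below is a hedged sketch: the field names (securityAdvisories, ghsaId, summary, severity, publishedAt) are assumptions based on GitHub's public GraphQL schema rather than anything stated in the article, and the token is a placeholder.

```python
# Hedged sketch: list recent security advisories via GitHub's GraphQL API.
# The GraphQL field names are assumptions based on GitHub's public schema
# and may differ from the Security Advisory API described in the announcement.
import requests

GITHUB_TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder; a real token is required

QUERY = """
{
  securityAdvisories(first: 5) {
    nodes {
      ghsaId
      summary
      severity
      publishedAt
    }
  }
}
"""

response = requests.post(
    "https://api.github.com/graphql",
    json={"query": QUERY},
    headers={"Authorization": f"bearer {GITHUB_TOKEN}"},
)
response.raise_for_status()

for advisory in response.json()["data"]["securityAdvisories"]["nodes"]:
    print(advisory["ghsaId"], advisory["severity"], "-", advisory["summary"])
```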


MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

Natasha Mathur
17 Oct 2018
3 min read
MongoDB, a leading free and open source general purpose database platform, announced yesterday that it has issued a new software license, the Server Side Public License (SSPL), for the MongoDB community server. The new license will apply to all new releases and versions of the MongoDB community server, including patch fixes for prior versions.

"The market is increasingly consuming software as a service, creating an incredible opportunity to foster a new wave of great open source server-side software. Unfortunately, once an open source project becomes interesting, it is too easy for cloud vendors who have not developed the software to capture all of the value while contributing little back to the community," mentioned Eliot Horowitz, CTO and co-founder, MongoDB.

Earlier, MongoDB was licensed under the GNU AGPLv3 (AGPL). This license allowed companies to modify and run MongoDB as a publicly available service, but only if they open sourced their software or acquired a commercial license from MongoDB. However, as the popularity of MongoDB grew, some cloud providers started taking MongoDB's open source code to offer hosted commercial versions of its database to their users without abiding by the open source rules. This is why MongoDB decided to switch to the SSPL.

"We have greatly contributed to, and benefited from, open source, and are in a unique position to lead on an issue impacting many organizations. We hope this new license will help inspire more projects and protect open source innovation," said Horowitz.

The SSPL is not very different from the AGPL, except that the SSPL clearly specifies the conditions for providing open source software as a service. In fact, the new license offers the same level of freedom as the AGPL to the open source community. Companies still have the freedom to use, review, modify, and redistribute the software, but to offer MongoDB as a service, they need to open source the software that they're using. This does not apply to customers who have purchased a commercial license from MongoDB.

"We are big believers in open source. It leads to more valuable, robust and secure software. However, it is important that open source licenses evolve to keep pace with the changes in our industry. With the added protection of the SSPL, we can continue to invest in R&D and further drive innovation and value for the community," mentioned Dev Ittycheria, President & CEO, MongoDB.

For more information, check out the official MongoDB announcement.

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more


MIT plans to invest $1 billion in a new College of Computing that will serve as an interdisciplinary hub for computer science, AI, data science

Bhagyashree R
16 Oct 2018
3 min read
Yesterday, MIT announced that it is investing $1 billion to establish a new college for computing: the MIT Schwarzman College of Computing. The college is named after Mr. Schwarzman, the chairman, CEO, and co-founder of Blackstone, who has contributed $350 million toward it. The college will be dedicated to work in computer science, AI, data science, and related fields. This initiative, according to MIT, is the single largest investment in computing and AI by any American academic institution.

The MIT Schwarzman College of Computing aims to teach students the foundations of computing. Students will also learn how machine learning and data science can be applied in real life. A curriculum will be designed to satisfy the growing interest in majors that cross computer science with other disciplines. Along with teaching advanced computing, the college will also focus on teaching and research around relevant policy and ethics. This will educate students about responsibly using these advanced technologies in support of the greater good.

Rafael Reif, MIT President, believes that this college will help students and researchers from various disciplines use computing and AI to advance their fields:

"As computing reshapes our world, MIT intends to help make sure it does so for the good of all. In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work."

To attract distinguished individuals from other universities, government, industry, and journalism, the college plans to offer various opportunities. These include selective undergraduate research opportunities, graduate fellowships in ethics and AI, a seed-grant program for faculty, and a fellowship program. Along with these opportunities, fifty new faculty positions will be created: 25 will be appointed to advance computing in the college, and the other 25 will be appointed jointly in the college and departments across MIT.

MIT has currently raised $650 million of the $1 billion required for the college, and its senior administration is actively looking for more contributors. Among the top partners in this initiative is IBM. Ginni Rometty, IBM chairman, president, and CEO, said:

"As MIT's partner in shaping the future of AI, IBM is excited by this new initiative. The establishment of the MIT Schwarzman College of Computing is an unprecedented investment in the promise of this technology. It will build powerfully on the pioneering research taking place through the MIT-IBM Watson AI Lab. Together, we will continue to unlock the massive potential of AI and explore its ethical and economic impacts on society."

The MIT Schwarzman College of Computing is one of the most significant structural changes to MIT since the early 1950s. According to the official announcement, the college is expected to open in September next year, and construction is scheduled to complete in 2022.

To read the full announcement, head over to MIT's official website.
MIT's Transparency by Design Network: A high performance model that uses visual reasoning for machine interpretability
IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now Open source


Indeed lists top 10 skills to land a lucrative job building autonomous vehicles

Melisha Dsouza
16 Oct 2018
3 min read
It is predicted that by 2025, the market for partially autonomous vehicles will hit 36 billion U.S. dollars. The autonomous car is expected to take the market by storm. A thriving example is the Tesla Autopilot, which has driven over 1.4 billion estimated miles as this post goes live. As the autonomous car sector grows, skilled individuals are in high demand in this domain, and top companies have already started their hiring process to get people on board. Last week, Indeed's analytics team put together a list of companies with job descriptions related to 'autonomous vehicles'.

Here is the list of top companies hiring for autonomous vehicle jobs (Source: Indeed.com):

The company topping the charts is Aptiv. Aptiv operates out of the Detroit metro area and is focused on self-driving and connected vehicles. The company plans to add around 5,000 to 6,000 employees. Following Aptiv is NVIDIA, a company well known for its chips; NVIDIA makes the computers that power self-driving capabilities in every Tesla vehicle. Along with Tesla, the company has also partnered with Audi to build autonomous-driving capabilities. Two of the biggest auto manufacturers based in Detroit, General Motors and Ford, are at number three and number four respectively. Both companies have shown interest and invested heavily in autonomous vehicle technology in recent years. The rest of the list comprises newer companies testing the waters of autonomous vehicles. Intel, surprisingly, stands at number eight; it looks like this company, known for making semiconductor chips and personal computer microprocessors, is also showing a growing interest in this domain. Samsung Semiconductor also makes the list, along with Flex.

Skills needed for jobs in autonomous vehicles

According to Indeed, here is the list of the top 10 skills individuals looking for a job in the self-driving car domain must possess (Source: Indeed.com):

As seen from the list, most of these skills are programming related. This comes as a surprise to automobile engineers who are not involved with software development at all. Along with programming languages like C and C++, individuals are also expected to have sound knowledge of image processing and artificial intelligence. This is not surprising, considering that posts for AI-related roles on Indeed almost doubled between June 2015 and June 2018.

While there is no strong evidence that this sector will flourish in the future, it is clear that companies have their eye on this domain. It will be interesting to see the kind of skill set this domain encourages individuals to develop. To know more about this report, head over to Indeed.com.

Tesla is building its own AI hardware for self-driving cars
This self-driving car can drive in its imagination using deep reinforcement learning
Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system


Tesla v9 to incorporate neural networks for autopilot

Prasad Ramesh
16 Oct 2018
3 min read
Tesla, the car maker founded by Elon Musk, is incorporating larger neural networks for Autopilot in the new Tesla v9. Based on the new Autopilot capabilities of version 9, the new neural net is a significant upgrade over v8. It can now track vehicles and other objects around the car by making better use of the eight cameras around the car.

Tesla Motors Club member Jimmy_d, a deep learning expert, has shared his thoughts on v9 and the neural network used in it. Tesla has now deployed a new camera network to handle all 8 cameras. Like v8, the v9 neural network system consists of a set of 'camera networks' which process camera output directly, and a separate set of 'post-processing' networks that take output from the camera networks and turn it into higher-level actionable abstractions. V9 is a pretty big change from v8. Other major changes from v8 to v9, as stated by Jimmy, are:

The same weight file is used for all cameras (this has pretty interesting implications; previously, v8 main/narrow seems to have had separate weights for each camera).
Processed resolution of the 3 front cameras and the back camera: 1280x960 (full camera resolution).
Processed resolution of the pillar and repeater cameras: 640x480 (half the camera's true resolution in each dimension).
All cameras: 3 color channels, 2 frames (2 frames also has very interesting implications). In v8 this was 640x416, 2 color channels, 1 frame, for the main and narrow cameras only.

These camera changes mean a much larger neural network that requires more processing power. The v9 network takes images at a resolution of 1280x960 with 3 color channels and 2 frames per camera. That's 1280x960x3x2 as an input, which is about 7.3MB. The v8 main camera processing frame was 640x416x2, that is, about 0.5MB. The v9 network therefore has access to far more detail per camera (a back-of-the-envelope check of these sizes follows below).

About the network size, Jimmy said: "This V9 network is a monster, and that's not the half of it. When you increase the number of parameters (weights) in an NN by a factor of 5 you don't just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it's more akin to a number with 5 times as many digits. So if V8's expressive capacity was 10, V9's capacity is more like 100,000."

Tesla CEO Elon Musk had something to say about the estimates made by Jimmy:
https://twitter.com/elonmusk/status/1052101050465808384

The amount of training data doesn't go up by a mere 5x. It takes at least thousands, and even millions, of times more data to fully utilize a network that has 5x as many parameters. We will see this new neural network implementation on the road in new cars about six months down the line.

For more details, you can view the discussion on the Tesla Motors Club website.

Tesla is building its own AI hardware for self-driving cars
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI
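As a quick sanity check of the input sizes quoted above, here is a hedged back-of-the-envelope calculation. It assumes one byte per pixel per color channel, which the article does not state explicitly.

```python
# Back-of-the-envelope check of the per-camera input sizes quoted above.
# Assumption: one byte per pixel per color channel (the article does not
# state the pixel depth).

def input_size_mb(width, height, channels, frames):
    """Raw input size in megabytes (10^6 bytes) for one camera."""
    return width * height * channels * frames / 1_000_000

v9_camera = input_size_mb(1280, 960, channels=3, frames=2)  # ~7.4 MB, close to the ~7.3 MB quoted
v8_main = input_size_mb(640, 416, channels=2, frames=1)     # ~0.5 MB, as quoted

print(f"v9 per-camera input:  {v9_camera:.1f} MB")
print(f"v8 main-camera input: {v8_main:.1f} MB")
print(f"v9 takes roughly {v9_camera / v8_main:.0f}x more raw input per camera")
```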

“We call on the UN to invest in data-driven predictive methods for promoting peace”, Nature researchers on the eve of ViEWS conference

Sugandha Lahoti
16 Oct 2018
4 min read
Yesterday, in an article published in Nature, the international journal of science, prominent political researchers Weisi Guo, Kristian Gleditsch, and Alan Wilson discussed how artificial intelligence can be used to predict outbursts of violence, potentially saving lives and promoting peace. This sets the stage for the ongoing two-day ViEWS conference organized by Uppsala University in Sweden, which focuses on Violence Early-Warning Systems.

According to the researchers, governments and international communities can often flag spots that may become areas of armed violence using algorithms that forecast risk. These algorithms are similar to the predictive methods used for forecasting extreme weather. They estimate the likelihood of violence by extrapolating from statistical data and analyzing text in news reports to detect tensions and military developments. Artificial intelligence is now poised to boost the power of these approaches. Some AI systems already working in this area include Lockheed Martin's Integrated Crisis Early Warning System, the Alan Turing Institute's project on global urban analytics for resilient defense, which studies the mechanics that cause conflict, and the US government's Political Instability Task Force.

The researchers believe artificial intelligence will help conflict models make correct predictions. This is because machine learning techniques offer more information about the wider causes of conflicts and their resolution, and provide theoretical models that better reflect the complexity of social interactions and human decision-making.

How AI and predictive methods could prevent conflicts

The article describes how AI systems could prevent conflicts and take the necessary actions to promote peace. Broadly, the researchers suggest the following measures to improve conflict forecasting:

Broaden data collection
Reduce unknowns
Develop theories
Set up a global consortium

Ideally, AI systems should be capable of offering explanations for violence and providing strategies for preventing it. However, this may prove difficult because conflict is dynamic and multi-dimensional, and the data collected at present is narrow, sparse, and disparate. AI systems need to be trained to make inferences. Presently, they learn from existing data, test whether predictions hold, and then refine the algorithms accordingly. This assumes that the training data mirrors the situation being modeled, which is often not the case in practice and sometimes makes the predictions unreliable.

Another important aspect the article describes is modeling complexity. An AI system should decide where it is best to intervene for a peaceful outcome and how much intervention is needed. The article also urges conflict researchers to develop a universally agreed framework of theories describing the mechanisms that cause wars. Such a framework should dictate what sort of data is collected and what needs to be forecast.

The researchers have also proposed that an international consortium be set up to develop formal methods for modeling the steps a society takes to wage war. The consortium should involve academic institutions, international and government bodies, and industrial and charity interests in reconstruction and aid work. All research done by the members must use open data, be reproducible, and have benchmarks for results.
Ultimately, their vision for the proposed consortium is to "set up a virtual global platform for comparing AI conflict algorithms and socio-physical models." They concluded by saying, "We hope to take the first steps to agree to a common data and modeling infrastructure at the ViEWS conference workshop on 15-16 October."

Read the full article in the Nature journal.

Google Employees Protest against the use of Artificial Intelligence in Military
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Google opts out of Pentagon's $10 billion JEDI cloud computing contract, as it doesn't align with its ethical use of AI principles
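The Nature article stays at a policy level, but the kind of risk forecasting it describes, extrapolating the likelihood of violence from historical indicators, can be sketched with standard tooling. The example below is purely illustrative: the data is synthetic and the feature names are invented, so it is a sketch of the general approach rather than the ViEWS model.

```python
# Illustrative sketch of the kind of risk forecasting described above:
# estimate the likelihood of violence from historical indicators.
# All data and feature names here are synthetic; this is not the ViEWS model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 4000  # hypothetical country-month observations

past_events = rng.poisson(2, n)      # conflict events in the previous year
gdp_z = rng.normal(0, 1, n)          # standardized GDP per capita
news_tension = rng.uniform(0, 1, n)  # tension score mined from news text
X = np.column_stack([past_events, gdp_z, news_tension])

# Synthetic label: violence in the next month, loosely driven by the features
logits = 0.8 * past_events - 0.5 * gdp_z + 2.0 * news_tension - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]  # estimated probability of violence
print("highest-risk synthetic cases:", np.sort(risk)[-3:].round(2))
```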


'Employees of Microsoft' ask Microsoft not to bid on US Military’s Project JEDI in an open letter

Sugandha Lahoti
15 Oct 2018
4 min read
Last Tuesday, Microsoft announced plans to bid on the Joint Enterprise Defense Infrastructure (JEDI) contract, a $10 billion project to build cloud services for the Department of Defense. However, over the weekend, an account named 'Employees of Microsoft' on Medium urged the company not to bid on the JEDI project in an open letter. They said, "The contract is massive in scope and shrouded in secrecy, which makes it nearly impossible to know what we as workers would be building."

At the time of writing, no further details about the 'Employees of Microsoft' Medium account, apart from the fact that the open letter is its first post, have come to light. We are unaware of whether this account genuinely represents a section of employees at Microsoft and, if it does, how many employees have signed the open letter. No names have been attached to it.

Earlier this month, Google announced that it will not be competing for the Pentagon's cloud-computing contract. It opted out of bidding for project JEDI, saying the project may conflict with its principles for the ethical use of AI. In August, Oracle Corp filed a protest with the Government Accountability Office (GAO) against the JEDI cloud contract. Oracle believes that the contract should not be awarded to a single company but should instead allow for multiple winners. DoD Chief Management Officer John H. Gibson II explained the program's impact, saying, "We need to be very clear. This program is truly about increasing the lethality of our department."

Many Microsoft employees agree that what they build should not be used for waging war. Per the letter, "When we decided to work at Microsoft, we were doing so in the hopes of empowering every person on the planet to achieve more, not with the intent of ending lives and enhancing lethality." They also alleged that with JEDI, Microsoft executives are on track to betray the principles of "reliability and safety, privacy and security, inclusiveness, transparency, and accountability" in exchange for short-term profits.

What do Microsoft employees want?

Microsoft employees have asked strong questions such as, "what are Microsoft's A.I. Principles, especially regarding the violent application of powerful A.I. technology? How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?" They want clear ethical guidelines and meaningful accountability regarding which uses of technology are acceptable and which are off the table. They also want the cloud and edge solutions listed on Azure's blog to be reviewed by Microsoft's A.I. ethics committee, Aether. Not just that, the petitioners have also urged employees of other tech companies to take similar action, asking how their work will be used and where it will be applied, and then acting according to their principles.

Many employees within Microsoft have also voiced ethical concerns regarding the company's ongoing contract with Immigration and Customs Enforcement (ICE). Per this contract, Microsoft provides Azure cloud computing services that have enabled ICE to enact violence and terror on families at the border and within the United States. "Despite our objections, the contract remains in place. Microsoft's decision to pursue JEDI reiterates the need for clear ethical guidelines, accountability, transparency, and oversight."

Read the entire open letter on Medium.
Google opts out of Pentagon's $10 billion JEDI cloud computing contract, as it doesn't align with its ethical use of AI principles
Oracle's bid protest against U.S Defence Department's (Pentagon) $10 billion cloud contract
Google takes steps toward better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25

Sugandha Lahoti
15 Oct 2018
4 min read
Last Saturday, Wired interviewed Jeff Weiner, CEO of LinkedIn, as part of its 25th anniversary celebration. He talked about the implications of technology for modern society, saying that technology amplifies tribalism. He also talked about how LinkedIn keeps a tab on unconscious bias and why Americans need to develop soft skills to succeed in the coming years.

Technology accentuates tribalism

When asked about the implications of technology on society, Weiner said, "I think increasingly, we need to proactively ask ourselves far more difficult, challenging questions—provocative questions—about the potential unintended consequences of these technologies. And to the best of our ability, try to understand the implications for society." This statement is justified, as every week there is a top story about some company going wrong in some direction: the shutting down of Google+, Facebook's security breach compromising 50M accounts, and so on.

He further talked about technology dramatically accelerating and reinforcing tribalism at a time when, increasingly, we need to be coming together as a society. He says that one of the most important challenges for tech in the next 25 years is to "understand the impact of technology as proactively as possible. And trying to create as much value, and trying to bring people together to the best of our ability."

Unconscious bias on LinkedIn

He also talked about unconscious bias as an unintended consequence of LinkedIn's algorithms and initiatives. "It shouldn't happen that LinkedIn reinforces the growing socioeconomic chasms on a global basis, especially here in the United States, by providing more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks."

Read more: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

He elaborated on how LinkedIn is addressing this unconscious bias. LinkedIn's Career Advice Hub was developed last year, with the goal of creating economic opportunity for every member of the global workforce, as a response to the unconscious bias that crept into its 'Ask For a Referral' program. The Career Advice Hub enables any member of LinkedIn to ask for help, and any member of LinkedIn to volunteer to help and mentor them. The company is also going to create economic opportunities for frontline workers, middle-skilled workers, and blue-collar workers. Another focus is on knowledge workers "who don't necessarily have the right networks or the right degrees."

Soft skills: The biggest skill gap in the U.S.

Jeff also said that the biggest skills gap in the United States is not coding skills but soft skills. This includes written communication, oral communication, team building, people leadership, and collaboration. "For jobs like sales, sales development, business development, customer service, this is the biggest gap, and it's counter-intuitive."

Read more: 96% of developers believe developing soft skills is important
Soft skills every data scientist should teach their child

Soft skills are necessary because AI is still far from being able to replicate and replace human interaction and human touch. "So there's an incentive for people to develop these skills because those jobs are going to be more stable for a longer period of time." Before you start thinking about becoming an AI scientist, you need to know how to send email, how to work a spreadsheet, and how to do word processing.
Jeff says, "Believe it or not, there are broad swaths of the population and the workforce that don't have those skills. And it turns out if you don't have these foundational skills, if you're in a position where you need to re-skill for a more advanced technology, it becomes almost prohibitively complex to learn multiple skills at the same time."

Read the full interview on Wired.

The ethical dilemmas developers working on Artificial Intelligence products must consider
Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"


Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system

Natasha Mathur
15 Oct 2018
3 min read
Google announced last week that it is open-sourcing Active Question Answering (ActiveQA), a research project that involves training artificial agents for question answering using reinforcement learning. As part of open sourcing the project, Google has released a TensorFlow package for the ActiveQA system.

The TensorFlow ActiveQA package comprises three main components, along with the code necessary to train and run the ActiveQA agent. The first component is a pre-trained sequence-to-sequence model that takes a question as input and returns its reformulations. The second component is an answer selection model that uses a convolutional neural network and gives a score to each triplet of original question, reformulation, and answer; the selector uses pre-trained, publicly available word embeddings (GloVe). The third component is a question answering system (the environment) that uses BiDAF, a popular question answering system.

"ActiveQA... learns to ask questions that lead to good answers. However, because training data in the form of question pairs, with an original question and a more successful variant, is not readily available, ActiveQA uses reinforcement learning, an approach to machine learning concerned with training agents so that they take actions that maximize a reward, while interacting with an environment," reads the Google AI blog.

The concept of ActiveQA was first introduced in Google's ICLR 2018 paper "Ask the Right Questions: Active Question Reformulation with Reinforcement Learning". ActiveQA differs considerably in its approach from traditional QA systems. Traditional QA systems use supervised learning techniques along with labeled data to train a system. Such a system is capable of answering arbitrary input questions, but it cannot deal with uncertainty the way humans would: it cannot reformulate questions, issue multiple searches, or evaluate the responses, which leads to poorer quality answers.

ActiveQA, on the other hand, comprises an agent that consults the QA system repeatedly. This agent reformulates the original question many times, which helps it select the best answer. Each reformulated question is evaluated on the basis of how good the corresponding answer is. If the corresponding answer is good, then the learning algorithm adjusts the model's parameters so that the reformulation that led to the right answer is more likely to be generated again. This approach allows the agent to engage in a dynamic interaction with the QA system, which leads to better quality of the returned answers.

As per an example mentioned by Google, consider the question "When was Tesla born?". The agent reformulates the question in two different ways: "When is Tesla's birthday" and "Which year was Tesla born". This helps it retrieve the answers to both questions from the QA system. Using all this information, the system returns the answer "July 10, 1856".

"We envision that this research will help us design systems that provide better and more interpretable answers, and hope it will help others develop systems that can interact with the world using natural language," mentions Google.

For more information, read the official Google AI blog.
Google, Harvard researchers build a deep learning model to forecast earthquake aftershocks location with over 80% accuracy
Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions
Google moving towards data centers with 24/7 carbon-free energy
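Google's released TensorFlow package has its own components and training scripts; the toy sketch below only illustrates the reformulate-score-select loop described above. The functions reformulate, answer, and score are invented stand-ins for the real components (the seq2seq reformulator, the BiDAF environment, and the CNN answer selector) and are not part of the actual package.

```python
# Toy illustration of the reformulate-score-select loop described above.
# reformulate(), answer(), and score() are invented stand-ins, not the
# components of Google's released ActiveQA package.

def reformulate(question):
    # The real model generates paraphrases; here we just fake two variants.
    return [question,
            question.replace("When was", "Which year was"),
            question.replace("When was", "When is").replace("born?", "born (birthday)?")]

def answer(question):
    # The real environment is a QA system (BiDAF); here it is a lookup table.
    canned = {
        "When was Tesla born?": "10 July 1856",
        "Which year was Tesla born?": "1856",
    }
    return canned.get(question, "unknown")

def score(original, reformulation, candidate_answer):
    # The real selector is a trained CNN; here, prefer non-empty, dated answers.
    if candidate_answer == "unknown":
        return 0.0
    return 1.0 if any(ch.isdigit() for ch in candidate_answer) else 0.5

def active_qa(question):
    candidates = []
    for reform in reformulate(question):
        ans = answer(reform)
        candidates.append((score(question, reform, ans), reform, ans))
    best_score, best_reform, best_answer = max(candidates)
    return best_answer, best_reform

print(active_qa("When was Tesla born?"))
```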

RawGit, the project that made sharing and testing code on GitHub easy, is shutting down!

Melisha Dsouza
15 Oct 2018
2 min read
On the 8th of October, the team at RawGit announced that the GitHub CDN service is now in a sunset phase and will soon shut down. The project was started five years ago with the intention of helping users quickly share example code or test pages from GitHub. Using RawGit, one could skip the trouble of setting up a static site on a GitHub Pages branch when temporarily needing to share examples. It acted as a caching proxy that placed minimal load on GitHub while giving users easy static file hosting from a GitHub repo.

Why is RawGit shutting down?

August 2018 saw news of crypto miners exploiting RawGit to serve files from GitHub repositories. Attackers obtained user resources through RawGit CDN abuse, weakening the system. A user aliased jdobt uploaded malicious files to GitHub, which were later cached using RawGit. Then, using RawGit URLs, the attacker inserted cryptojacking malware on sites running WordPress and Drupal. This can be seen as a key reason for shutting down RawGit, especially after Ryan Grove, its creator, stated: 'RawGit has also become an attractive distribution mechanism for malware. Since I have almost no time to devote to fighting malware and abuse on RawGit (and since it would be no fun even if I did have the time), I feel the responsible thing to do is to shut it down. I would rather kill it than watch it be used to hurt people.'

Ryan has also mentioned a few free services as alternatives to some of RawGit's functionality:

jsDelivr
GitHub Pages
CodeSandbox
unpkg

GitHub repositories that have used RawGit to serve content within the last month will continue to be served until at least October of 2019. URLs for other repositories are no longer being served.

To know more about this announcement, head over to RawGit's official blog.

4 myths about Git and GitHub you should know about
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management
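For anyone migrating old links, jsDelivr's GitHub endpoint is the closest drop-in replacement among the alternatives Ryan lists. The helper below is a rough sketch that assumes the standard rawgit.com path layout of /user/repo/ref/path; rewritten URLs should be verified before being relied on.

```python
# Rough sketch: rewrite RawGit URLs to jsDelivr's GitHub CDN.
# Assumes the standard rawgit.com layout /user/repo/ref/path and jsDelivr's
# gh endpoint; verify rewritten URLs before relying on them.
from urllib.parse import urlparse

def rawgit_to_jsdelivr(url):
    parsed = urlparse(url)
    if parsed.netloc not in ("rawgit.com", "cdn.rawgit.com"):
        raise ValueError(f"not a RawGit URL: {url}")
    user, repo, ref, *path = parsed.path.strip("/").split("/")
    return f"https://cdn.jsdelivr.net/gh/{user}/{repo}@{ref}/{'/'.join(path)}"

print(rawgit_to_jsdelivr(
    "https://cdn.rawgit.com/someuser/somerepo/v1.0.0/dist/example.min.js"
))
# -> https://cdn.jsdelivr.net/gh/someuser/somerepo@v1.0.0/dist/example.min.js
```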


Is AT&T trying to twist data privacy legislation to its own favor?

Amarabha Banerjee
15 Oct 2018
4 min read
On September 26th, U.S. Senator John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, summoned a hearing titled 'Examining Safeguards for Consumer Data Privacy'. Executives from AT&T, Amazon, Google, Twitter, Apple, and Charter Communications provided their testimonies to the Committee. The hearing took place to examine the privacy policies of top technology and communications firms, review the current state of consumer data privacy, and offer members the opportunity to discuss possible approaches to safeguarding privacy more effectively.

John Thune opened the meeting by saying, "This hearing will provide leading technology companies and internet service providers an opportunity to explain their approaches to privacy, how they plan to address new requirements from the European Union and California, and what Congress can do to promote clear privacy expectations without hurting innovation."

There is, however, one major problem with this approach. A hearing on consumer privacy barring any participation from the consumer side is like a meeting to discuss women's safety and empowerment without any woman on the board. Why would the administration do such a thing? They might simply not be ready to bring all the sides into one room. They did hold a second set of hearings with privacy advocates last week. But will this really bring a change in perspective? And where are we headed?

AT&T and net neutrality

One of the key issues at hand in this story is net neutrality. For those that don't know, this is the principle that Internet service providers should allow access to all content and applications regardless of the source, and shouldn't be able to favor or block particular products or websites. This basically means a democratic internet. The recent decision ending net neutrality across the majority of U.S. states was arguably pushed and supported by major ISPs and corporations. This makes AT&T's declaration that it wants to uphold user privacy rules seem farcical, like a statement from a hunter about to pounce on its prey while luring it with fake consolations.

As one of the leading telecom companies, AT&T has a significant stake in the online advertising and direct TV industry. The more it can track you online and record your habits, the better it can push ads and continue to milk user data without users being informed. That was its goal when it worked against the modest FCC user data privacy guidelines for broadband providers last year, before they could even take effect. Those rules largely just mandated that ISPs be transparent about what data is collected and who it is being sold to, while requiring opt-in consent for particularly sensitive consumer data like your financial background.

When the same company rallies for user data privacy rules and tries to burden social media and search engine giants like Facebook, Google, and Microsoft, there is definite doubt about its actual intent. The real reason might just be to weaken the power of major tech companies like Google and Facebook and to push its own agenda via its broadband network. Monopoly in any form is not an ideal scenario for users and customers. While Google and Facebook are vying for a monopoly over how users interact online every day, AT&T is playing a different game altogether: that of gaining control of the internet itself.
Google, though, has plans to lay its own undersea internet cable; it is going to be hard for AT&T to compete, as admirable as its ostensible hubris might be. Still, there is a decent chance that it might become a two-horse race by the middle of the next decade. Of course, the ultimate impact of this sort of monopoly remains to be seen. For AT&T, the opportunity is there, even if it looks like a big challenge.

Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
The U.S. Justice Department sues to block the new California Net Neutrality law
California's tough net neutrality bill passes state assembly vote


Meet Prescience, the AI that can help anesthesiologists take critical decisions in the OR

Prasad Ramesh
12 Oct 2018
4 min read
Before and during surgery, anesthesiologists need to keep track of the anesthesia administered and the patient's vitals. An imbalance in the level of anesthesia can cause low oxygen levels in the blood, known as hypoxemia. Currently, there is no system to predict when this could happen during surgery, and the patient is at the mercy of an anesthesiologist's experience and discretion.

The machine learning system called 'Prescience'

A team of researchers from the University of Washington has come up with a system to predict whether a patient is at risk of hypoxemia. This is done using patient data like age and body mass index. Data from 50,000 surgeries was collected to train the machine learning model. The team wanted the model to solve two different kinds of problems:

To look at pre-surgery patient information and predict whether a patient would have hypoxemia after anesthesia is administered.
To predict the occurrence of hypoxemia at any point during the surgery by using real-time data.

For the first problem, BMI was a crucial predictive factor; for the second, the current oxygen levels. Lee and Lundberg then worked on a new approach to train Prescience in a way that it would generate understandable explanations behind its predictions.

Testing the model

Now it was time to test Prescience. Lee and Lundberg created a web interface that ran anesthesiologists through cases from surgeries in the dataset that were not used to train Prescience. For the real-time test, the researchers specifically chose cases that would be hard to predict, for example, when a patient's blood oxygen level is stable for 10 minutes and then drops. It was noted that Prescience improved the ability of doctors to correctly predict a patient's hypoxemia risk by 16 percent before a surgery and by 12 percent in real time during a surgery. With the help of Prescience, the anesthesiologists were able to correctly distinguish between the two scenarios nearly 80 percent of the time, both before and during surgery.

Prescience is not ready to be used in real operations yet. Lee and Lundberg plan to continue working with anesthesiologists to improve Prescience. In addition to hypoxemia, the team hopes to predict low blood pressure and recommend appropriate treatment plans with Prescience in the future.

This method 'opens the AI black box'

Although they could have simply built a model that predicts hypoxemia, the researchers also wanted to answer the question "Why?", a change from the traditional black-box AI models engineers and researchers are used to. Lee, an author of the paper, said: "Modern machine-learning methods often just spit out a prediction result. They don't explain to you what patient features contributed to that prediction. Our new method opens this black box and actually enables us to understand why two different patients might develop hypoxemia. That's the power."

Who are the team members?

The research team consisted of four people, two from medicine and two from computer science: Bala Nair, research associate professor of anesthesiology and pain medicine at the UW School of Medicine; Su-In Lee, an associate professor in the UW's Paul G. Allen School of Computer Science & Engineering; Monica Vavilala, professor of anesthesiology and pain medicine at the UW School of Medicine; and Scott Lundberg, a doctoral student in the Allen School. The system is not meant to replace doctors.
You can read the research paper in the Nature science journal and on the University of Washington website.

Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia
How to predict viral content using random forest regression in Python [Tutorial]
SAP creates AI ethics guidelines and forms an advisory panel
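The paper is the authoritative source for how Prescience actually works; the sketch below only illustrates the general pattern the article describes, a model trained on pre-surgery features that both predicts hypoxemia risk and explains which features drove an individual prediction. The data is synthetic and the ablation-style attribution is a stand-in, not the team's explanation method.

```python
# Illustrative sketch only: predict a synthetic "hypoxemia risk" from
# pre-surgery features and attribute each prediction to its inputs with a
# crude ablation. Not the Prescience model, its data, or its explanation method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 3000
age = rng.uniform(20, 90, n)
bmi = rng.normal(27, 5, n)
spo2_baseline = rng.normal(97, 1.5, n)  # pre-op blood oxygen saturation

X = np.column_stack([age, bmi, spo2_baseline])
feature_names = ["age", "bmi", "spo2_baseline"]

# Synthetic label: higher BMI and lower baseline SpO2 raise hypoxemia risk
logits = 0.12 * (bmi - 27) - 0.6 * (spo2_baseline - 97) + 0.01 * (age - 50) - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = GradientBoostingClassifier().fit(X, y)

patient = X[:1]  # one patient's pre-surgery features
risk = model.predict_proba(patient)[0, 1]
print(f"predicted hypoxemia risk: {risk:.2f}")

# Crude per-patient attribution: how much does the risk change when each
# feature is replaced by its dataset mean?
for i, name in enumerate(feature_names):
    altered = patient.copy()
    altered[0, i] = X[:, i].mean()
    delta = risk - model.predict_proba(altered)[0, 1]
    print(f"  {name}: contribution ~ {delta:+.3f}")
```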

Google moving towards data centers with 24/7 carbon-free energy

Amey Varangaonkar
12 Oct 2018
3 min read
It comes as no surprise to most that Google has been one of the largest buyers of renewable energy. Over 2017 alone, Google purchased over 7 billion kilowatt-hours (kWh) of energy from solar panels and wind farms designed especially for its electricity consumption. In light of the recent IPCC climate change report, released just a couple of days back, Google has also published a paper discussing its efforts toward a 24/7 carbon-free energy initiative.

What does the Google paper say

In line with its promise of moving towards a future driven by carbon-free energy, Google's paper discusses the steps the company is taking to reduce its carbon footprint. Key aspects discussed in this paper, aptly titled 'Moving toward 24x7 Carbon-Free Energy at Google Data Centers: Progress and Insights', are:

Google's framework for using 24/7 carbon-free energy
How Google is currently utilizing carbon-free energy to power its data centers across different campuses situated all over the world; Finland, North Carolina, the Netherlands, Iowa, and Taiwan are some of the locations where this is being achieved
Analysis of current power usage and how the insights derived can be used in the journey ahead

Why Google is striving to adopt a carbon-free strategy

Per Google, the company has been carbon-neutral since 2007 and has met its goal of matching all of its global energy consumption with renewable energy. Considering the scale of Google's business and the size of its existing infrastructure, it has always been a large consumer of electricity. Google's business expansion plans for the near future, in turn, could have direct effects on its environmental footprint. As such, its strategy of 24/7 carbon-free energy makes complete sense. According to Google, "Pursuing this long-term objective is important for elevating carbon-free energy from being an important but limited element of the global electricity supply portfolio today, to a resource that fully powers our operations and ultimately the entire electric grid."

This is a positive and important step by Google towards building a carbon-free future with more dependence on renewable energy sources. It will also encourage other organizations of similar scale to adopt a similar approach to reducing carbon emissions. Microsoft, for example, has already pledged a 75% reduction of its carbon footprint by 2030. Oracle has also increased its solar power usage as part of its plan to reduce carbon emissions.

Read more:
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
Google's new Privacy Chief officer proposes a new framework for Security Regulation
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan


Grafana 5.3 is now stable, comes with Google Stackdriver built-in support, a new Postgres query builder

Bhagyashree R
11 Oct 2018
3 min read
Yesterday, the Grafana team made Grafana 5.3 stable. This version comes with several enhancements and new features, including built-in support for Google Stackdriver, improved TV and kiosk modes, a new query builder for Postgres, and more.

Built-in support for Google Stackdriver

Grafana 5.3 provides built-in support for Google Stackdriver, enabling you to visualize Stackdriver metrics in Grafana. Google Stackdriver is a monitoring service that aggregates metrics, logs, and events from infrastructure. It gives developers and operators a rich set of observable signals that speed root-cause analysis and reduce mean time to resolution (MTTR). You just have to create a GCE service account that has access to the Stackdriver API scope, download the service account key file from Google, and upload it on the Stackdriver datasource config page in Grafana, and you should have a secure server-to-server authentication setup.

Easily accessible TV and kiosk mode

A view mode icon is now displayed in the top bar to let you easily cycle through the different view modes. Choosing the first view mode hides the sidebar and most of the buttons in the top bar. In the second view mode, the top bar is completely hidden and only the dashboard is visible.

Notification reminders

It is now possible to set reminders so that you are continuously alerted until a problem is fixed. This is done on the notification channel itself and will affect all alerts that use that channel.

Introducing a new Postgres query builder

Grafana 5.3 provides a new graphical query builder for Postgres. This query builder makes it easier for both advanced users and beginners to work with time series in Postgres. You can find it in the metrics tab in the Graph or Singlestat panel's edit mode.

Improved OAuth support for GitLab

Grafana 5.3 comes with a new OAuth integration for GitLab that can be configured to only authenticate users who are members of certain GitLab groups. With this integration, you can now use GitLab OAuth with Grafana in a shared environment without giving everyone access to Grafana.

Variables with free text support

A new variable type named Text box has been introduced, which makes it easier and more convenient to provide free text input to a variable. This new variable type is displayed as a free text input field with an optional pre-filled default value.

Read the full changelog on Grafana's official website and also check out its GitHub repository.

Predictive Analytics with AWS: A quick look at Amazon ML
Apache Kafka 2.0.0 has just been released
Installing and Configuring X-pack on Elasticsearch and Kibana