
Tech News - Data


Ahead of RedisConf 2019, Redis Labs adds Intel Optane DC persistent memory support for Redis Enterprise users

Amrata Joshi
03 Apr 2019
4 min read
Yesterday, the team at Redis Labs, the provider of Redis Enterprise, announced that its customers can now scale their datasets using Intel Optane DC persistent memory. Scaling will be offered cost-effectively at multi-petabyte scale, at sub-millisecond speeds. The announcement coincided with the two-day RedisConf 2019 (2-3 April) in San Francisco, where 1,500 Redis developers, innovators, and contributors shared their use cases and experiences.

Redis Enterprise, a linearly scalable, in-memory multi-model database, supports native and probabilistic data structures, AI, streams, document, graph, time series, and search. It has been designed and optimized to operate in either mode of Intel's persistent memory technology, Memory Mode and App Direct Mode, giving customers the flexibility to use the most effective mode to process their massive datasets quickly and cost-effectively.

Intel Optane DC persistent memory is a memory technology that combines affordable large capacity with support for data persistence. Redis Labs collaborated closely with Intel throughout its development to provide high performance for the Redis Enterprise database. The technology also drastically improved performance in benchmark testing while offering large cost savings. Benchmark testing conducted by various companies shows that a single Redis Enterprise cluster node with a multi-terabyte dataset can support over one million operations per second at sub-millisecond latency while serving over 80% of requests from persistent memory. Redis Enterprise on Intel Optane DC persistent memory also offered more than 40 percent cost savings compared to traditional DRAM-only memory.

Key features of Intel Optane DC persistent memory

It optimizes in-memory databases for advanced analytics in multi-cloud environments.
It reduces the wait time associated with fetching data sets from system storage.
It helps transform content delivery networks by bringing greater memory capacity for delivering immersive content at the intelligent edge, providing better user experiences.
It provides consistent QoS (Quality of Service) levels to reach more customers while managing TCO (Total Cost of Ownership) at both the hardware and operating cost levels, making it a cost-effective solution for customers.

Intel Optane DC persistent memory provides a persistent memory tier between DRAM and SSD that offers up to 6TB of non-volatile memory capacity in a two-socket server alongside up to 1.5TB of DRAM. It thereby extends a standard machine's memory capacity to 7.5TB of byte-addressable memory (DRAM + persistent memory), while also providing persistence. The technology is available in a DIMM form factor as 128, 256, and 512GB persistent memory modules.

Alvin Richards, chief product officer at Redis Labs, wrote to us in an email, “Enterprises are faced with increasingly massive datasets that require instantaneous processing across multiple data-models. With Intel Optane DC persistent memory, combined with the rich data models supported by Redis Enterprise, global enterprises can now achieve sub-millisecond latency while processing millions of operations per second with affordable server infrastructure costs.” He further added, “Through our close collaboration with Intel, with Redis Enterprise on Intel Optane DC persistent memory our customers will not have to compromise on performance, scale, and budget for their multi-terabyte datasets.”

Redis Enterprise with Intel Optane DC persistent memory support is available for any cloud service or as downloadable software for hardware. To know more about Intel Optane DC persistent memory, check out Intel's page.

Announcements at RedisConf 19

Yesterday at RedisConf 19, Redis Labs introduced two new data models and a data programmability paradigm for multi-model operation. The major announcements were RedisTimeSeries, RedisAI, and RedisGears.

RedisTimeSeries
RedisTimeSeries is designed to collect and store high-volume, high-velocity data and organize it by time intervals. It helps organizations easily process useful data points with built-in capabilities for downsampling, aggregation, and compression, giving them the ability to query and extract data in real time for analytics. (A minimal usage sketch follows the further-reading list below.)

RedisAI
RedisAI eliminates the need to migrate data to and from different environments and allows developers to apply state-of-the-art AI models to the data. It reduces processing overhead by integrating with common deep learning frameworks including TensorFlow, PyTorch, and TorchScript, and by utilizing Redis Cluster capabilities over GPU-based servers.

RedisGears
RedisGears, an in-database serverless engine, can operate multiple models simultaneously. It is based on the efficient Redis Cluster distributed architecture and enables infinite programmability options supporting event-driven or transaction-based operations.

Today, Redis Labs will be showing how to get the most out of Redis Enterprise on Intel's persistent memory at RedisConf 19.

Further reading:
Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)
Redis Labs announces annual growth of more than 60% in the fiscal year 2019
Redis Labs raises $60 million in Series E funding led by Francisco Partners
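To make the RedisTimeSeries description above concrete, here is a minimal sketch using the redis-py client's generic execute_command() helper. It assumes a local Redis server with the RedisTimeSeries module loaded; the host, key name, retention window, and sample values are illustrative placeholders, not anything from the announcement.

```python
# Minimal sketch of the RedisTimeSeries commands described above, via redis-py's
# generic execute_command(). Assumes a local Redis instance with the
# RedisTimeSeries module loaded; key names and values are illustrative.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

# Create a time series with a retention window (ms) and a label for querying.
r.execute_command("TS.CREATE", "temperature:sensor1",
                  "RETENTION", 86400000, "LABELS", "sensor", "1")

# Append a few samples (millisecond timestamp, then the value).
now_ms = int(time.time() * 1000)
for i, value in enumerate([21.5, 21.7, 22.0]):
    r.execute_command("TS.ADD", "temperature:sensor1", now_ms + i * 1000, value)

# Query the raw range, then a downsampled (aggregated) view of the same window.
raw = r.execute_command("TS.RANGE", "temperature:sensor1", now_ms, now_ms + 10_000)
avg = r.execute_command("TS.RANGE", "temperature:sensor1", now_ms, now_ms + 10_000,
                        "AGGREGATION", "avg", 2000)
print(raw)
print(avg)
```

The AGGREGATION clause in the last call is the built-in downsampling mentioned in the announcement: samples are grouped into 2-second buckets and averaged server-side.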


Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Natasha Mathur
03 Apr 2019
3 min read
The Facebook AI team yesterday announced the open-sourcing of PyTorch-BigGraph (PBG), a tool that enables faster and easier production of graph embeddings for large graphs. With PyTorch-BigGraph, anyone can take a large graph and produce high-quality embeddings on a single machine or on multiple machines in parallel. PBG is written in PyTorch, allowing researchers and engineers to easily swap in their own loss functions, models, and other components. Beyond that, PBG computes the gradients and is automatically scalable.

The Facebook AI team states that standard graph embedding methods don't scale well and cannot operate on large graphs consisting of billions of nodes and edges. Many graphs also exceed the memory capacity of commodity servers, creating problems for embedding systems. PBG avoids this by performing block partitioning of the graph, which overcomes the memory limitations of graph embeddings: nodes are randomly divided into P partitions, sized so that two partitions fit in memory at once, and the edges are then divided into P² buckets based on their source and destination nodes. After this partitioning, training can be performed on one bucket at a time. (A conceptual sketch of this partitioning scheme follows the further-reading list below.)

PBG offers two ways to train embeddings of partitioned graph data: single-machine and distributed training. In single-machine training, embeddings and edges are swapped out when they are not being used. In distributed training, PBG uses PyTorch parallelization primitives, and embeddings are distributed across the memory of multiple machines.

The Facebook AI team also made several modifications to standard negative sampling, which is necessary for large graphs. “We took advantage of the linearity of the functional form to reuse a single batch of N random nodes to produce corrupted negative samples for N training edges… this allows us to train on many negative examples per true edge at little computational cost,” says the Facebook AI team. To produce embeddings useful in different downstream tasks, the team found an effective approach that corrupts edges with a mix of 50 percent nodes sampled uniformly and 50 percent nodes sampled based on their number of edges.

To analyze PBG's performance, Facebook AI used the publicly available Freebase knowledge graph, comprising more than 120 million nodes and 2.7 billion edges; a smaller subset of the Freebase graph, known as FB15k, was also used. PBG performed comparably to other state-of-the-art embedding methods on the FB15k data set. PBG was also used to train embeddings for the full Freebase graph, where its partitioning scheme reduced both memory usage and training time. PBG embeddings were also evaluated on several publicly available social graph data sets, and PBG outperformed all the competing methods.

“We… hope that PBG will be a useful tool for smaller companies and organizations that may have large graph data sets but not the tools to apply this data to their ML applications. We hope that this encourages practitioners to release and experiment with even larger data sets,” states the Facebook AI team. For more information, check out the official Facebook AI blog.

Further reading:
PyTorch 1.0 is here with JIT, C++ API, and new distributed packages
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn
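The block-partitioning idea above can be illustrated with a small, self-contained sketch. This is not PyTorch-BigGraph's actual code; the partition count, the hash-based node assignment, and the random graph are all illustrative assumptions used only to show how edges end up in P² buckets.

```python
# Conceptual sketch of the block partitioning described above (not the real
# PyTorch-BigGraph implementation): nodes are assigned to P partitions and
# each edge falls into one of P*P buckets keyed by its (source, destination)
# partitions, so training can load one bucket, i.e. two partitions, at a time.
import random
from collections import defaultdict

P = 4  # number of node partitions (illustrative)

def node_partition(node_id: int) -> int:
    return hash(node_id) % P

def bucket_edges(edges):
    """Group (src, dst) edges into P*P buckets by partition pair."""
    buckets = defaultdict(list)
    for src, dst in edges:
        buckets[(node_partition(src), node_partition(dst))].append((src, dst))
    return buckets

# Illustrative random graph: 1,000 nodes, 10,000 edges.
edges = [(random.randrange(1000), random.randrange(1000)) for _ in range(10_000)]
buckets = bucket_edges(edges)

# Training would iterate one bucket at a time, holding in memory only the two
# embedding partitions that bucket touches.
for (src_part, dst_part), bucket in sorted(buckets.items()):
    print(f"bucket ({src_part}, {dst_part}): {len(bucket)} edges")
```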


Google workers demand fair treatment for contractors; company rolls out mandatory benefits in response to improve working conditions

Natasha Mathur
03 Apr 2019
4 min read
Over 900 Google workers signed a letter yesterday urging Google to treat its contract workers fairly. Contract workers make up nearly 54% of Google's workforce. The letter was published on Medium by the Google Walkout For Real Change group. It states that on 8th March, about 82% of Google's 43-member Personality team were informed that their existing contract term had been shortened and that they would be terminated by 5th April. The Personality team describes itself as an international contract team responsible for the voice of Google Assistant across the world: “We are the human labor that makes the Google Assistant relevant, funny, and relatable in more than 50 languages,” reads the letter.

Because the contract team consists of expats from around the world, many would have to make big changes in their personal lives and move back to their home countries without any financial support. The letter states that contractors had been assured by their leads that the contract would be respected; however, the onset of layoffs across Google offices globally seemed to belie that assurance. The contractors were also not told by Google that these were layoffs; the terminations were instead framed as a “change in strategy.”

The letter also sheds light on the discriminatory environment within Google towards its TVCs (temps, vendors, and contractors). For instance, contractors are offered neither paid holidays nor health care. Moreover, during the layoff process, Google asked managers and full-time employees to distance themselves from the contractors and not offer them any support, so that Google would not come under legal obligations. The letter condemns the fact that Google boasts of its ability to scale up and down with agility, stating, “the whole team thrown into financial uncertainty is what scaling down quickly looks like for Google workers. This is the human cost of agility.”

The group has laid down three demands in the letter:
Google should respect and uphold the existing contract. In case the contracts were shortened, payment should be made for the remaining length of the contract.
Google should respect the work of contractors and should convert them to full-time employees.
Google should respect humanity. A policy should be implemented that allows FTEs (full-time employees) to openly empathize with TVCs, and FTEs should be able to thank TVCs for the work they have done.

Google's response to the letter

Google responded to the letter yesterday, stating that it is improving the working conditions of TVCs. Under the new changes, by 2022 all contractors who work at least 33 hours per week for Google will receive full benefits, including:
comprehensive health care
paid parental leave
a $15 minimum wage
a minimum of eight days of sick leave
$5,000 per year in tuition reimbursement for workers wanting to learn new skills and take courses

“These changes are significant and we're inspired by the thousands of full-time employees and TVCs who came together to make this happen,” reads the letter. However, the Personality team is still waiting to hear back from Google on whether the company will respect the current contracts or convert them into full-time positions.

https://twitter.com/GoogleWalkout/status/1113206052957433856

Eileen Naughton, VP of people operations at Google, told The Hill, "These are meaningful changes, and we're starting in the U.S., where comprehensive healthcare and paid parental leave are not mandated by U.S. law. As we learn from our implementation here, we'll identify and address areas of potential improvement in other areas of the world." Check out the official letter by Google workers here.

Further reading:
#GooglePayoutsForAll: A digital protest against Google's $135 million execs payout for misconduct
Google confirms it paid $135 million as exit packages to senior execs accused of sexual harassment
Google finally ends forced arbitration for all its employees


Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

Amrata Joshi
02 Apr 2019
3 min read
Last week, Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) to help it with major issues in AI such as facial recognition and machine learning fairness. The group was announced by Kent Walker, Google's senior vice president of global affairs, who said the council will provide diverse perspectives to Google. Google appointed eight members to the council from diverse fields including behavioural economics, privacy, applied mathematics, machine learning, industrial engineering, AI ethics, digital ethics, foreign policy, and public policy.

Now a group of Google employees is insisting that the company remove one of those appointees, Kay Coles James, the Heritage Foundation president who promotes anti-trans and anti-immigrant views. Her tweets document her opposition to LGBTQ rights, and Heritage has even hosted a panel of anti-transgender activists that lobbied against the LGBTQ discrimination protections proposed by congressional Democrats.

https://twitter.com/KayColesJames/status/1108768455141007360
https://twitter.com/KayColesJames/status/1108365238779498497

Yesterday, a group of employees known as ‘Googlers Against Transphobia and Hate’ filed a petition. The petition reads, "In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants. Such a position directly contravenes Google's stated values." The petition has already been signed by more than 1,000 Google employees. The employees voiced their opinion in the petition: “By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.”

Several researchers and civil society activists have joined the push against anti-trans and anti-LGBTQ views on the council. Alessandro Acquisti, a behavioral economist and privacy researcher, has declined an invitation to join the council.

https://twitter.com/ssnstudy/status/1112099054551515138

Google employees and researchers wrote that appointing James to the council "significantly undermines Google's position on AI ethics and fairness," pointing out that there have been consistent civil rights concerns around some AI technology. The petition further reads, "Not only are James' views counter to Google's stated values, but they are directly counter to the project of ensuring that the development and application of AI prioritizes justice over profit.”

Some people, however, are taking a stand for James. Cal Smith wrote on Medium, “Her views are not uncommon, and in fact are shared by a good percentage of Americans. If you are to have a truly representative AI that prioritizes non-discrimination then you must have a wide range of views included, including those you disagree with.”

The petition by Google employees will likely put some pressure on the company, considering that the intention is more about strengthening human rights than anything else. But it is yet to be seen what Google finally decides. Check out the letter by the Google employees here.

Further reading:
Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google Podcasts is transcribing full podcast episodes for improving search results


Researchers successfully trick Tesla autopilot into driving into opposing traffic via “small stickers as interference patches on the ground”

Fatema Patrawala
02 Apr 2019
4 min read
Progress in the field of machine vision is one of the most important factors in the rise of the self-driving car. An autonomous vehicle has to be able to sense its environment and react appropriately: free space has to be calculated, solid objects avoided, and all of the instructions painted on the tarmac or posted on signs have to be obeyed. Deep neural networks have turned out to be pretty good at classifying images, but it's still worth remembering that the process is quite unlike the way humans identify images, even if the end results are fairly similar.

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware. It includes remote control over the steering and an adversarial example attack on the Autopilot that confuses the car into driving into the oncoming traffic lane. The researchers used an attack chain that they disclosed to Tesla, and which Tesla now claims has been eliminated with recent patches.

To effect the remote steering attack, the researchers had to bypass several redundant layers of protection. Having done this, they were able to write an app that would let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding the actual steering wheel in the car as well as the Autopilot systems. This attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remote controlled at speeds up to 8 km/h.

Tesla vehicles use a variety of neural networks for Autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to attack these using adversarial examples (small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors). Most dramatically, the researchers attacked the Autopilot lane-detection systems. By adding noise to lane markings, they were able to fool the Autopilot into losing the lanes altogether; however, the patches they had to apply to the lane markings would not be hard for humans to spot. Much more seriously, they were able to use "small stickers" on the ground to effect a "fake lane attack" that fooled the Autopilot into steering into the opposite lane, where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust, or other interference. (A generic sketch of how adversarial perturbations are generated follows the further-reading list below.)

Misleading the Autopilot into the wrong direction with patches placed by a malicious attacker can, in some situations, be more dangerous than making it fail to recognize the lane at all. The researchers painted three inconspicuous, tiny squares into a picture taken from the camera, and the vision module recognized it as a lane with a high degree of confidence. After that they tried to build such a scene in the physical world, pasting small stickers as interference patches on the ground at an intersection. They used these patches to guide a Tesla vehicle in Autosteer mode into the reverse lane. In the test scenario (Figure 34 in the report), the red dashes are the stickers: the vehicle regards them as the continuation of its right lane, ignores the real left lane opposite the intersection, and when it travels to the middle of the intersection it takes the real left lane as its right lane and drives into the reverse lane.

Tesla's Autopilot lane-recognition function is robust in an ordinary external environment (no strong light, rain, snow, sand, or dust interference), but it still does not handle this test scenario correctly. This kind of attack is simple to deploy, and the materials are easy to obtain. As discussed in the report's introduction to Tesla's lane-recognition function, Tesla uses a pure computer vision solution for lane recognition, and the attack experiment found that the vehicle's driving decisions are based only on the computer vision lane-recognition results. The experiments prove that this architecture has security risks, and reverse-lane recognition is one of the necessary functions for autonomous driving on non-closed roads. In the scene the researchers built, if the vehicle knew that the fake lane pointed into the reverse lane, it could ignore the fake lane and avoid a traffic accident.

Further reading:
Tesla is building its own AI hardware for self-driving cars
Tesla v9 to incorporate neural networks for autopilot
Aurora, a self-driving startup, secures $530 million in funding from Amazon, Sequoia, and T. Rowe Price among others
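The adversarial-example idea discussed above, tiny perturbations that flip a model's output, can be illustrated with a generic fast gradient sign method (FGSM) sketch in PyTorch. This is not Tesla's vision stack or the Keen Lab attack: the stand-in model, the dummy input, and the epsilon value are all placeholder assumptions used only to show the mechanism.

```python
# Generic illustration of an adversarial perturbation (FGSM) in PyTorch.
# Not Tesla's model or the Keen Lab attack; model, input, and epsilon are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18().eval()  # stand-in classifier with random weights

def fgsm_example(image: torch.Tensor, true_label: torch.Tensor, epsilon: float = 0.01):
    """Return a slightly perturbed image that pushes the model away from true_label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage with a dummy input; a real attack would start from a camera frame.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_example(x, y)
print((x_adv - x).abs().max())  # perturbation is at most epsilon per pixel
```

The physical "small stickers" attack is conceptually the same search for a minimal, targeted perturbation, but constrained to patches that can survive printing and placement in the real world.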


Elasticsearch 7.0 rc1 releases with new allocation and security features

Natasha Mathur
01 Apr 2019
2 min read
The Elastic team released version 7.0 rc1 of its open source distributed, RESTful search and analytics engine, Elasticsearch, last week. Elasticsearch 7.0 rc1 brings new features, breaking changes, deprecations, and bug fixes, among others. (A small inspection sketch follows the further-reading list below.)

New features
Allocation: there is a new node repurpose tool in Elasticsearch 7.0 rc1.
Security: the internal security index has been switched to ".security-7".

Breaking changes
Distributed: cluster state size has been removed.
Features: the Migration Upgrade and Assistance APIs have been removed.

Enhancements
Retention lease sync intervals have been reduced.
Retention leases have been integrated into recovery from remote.
Dedicated retention lease exceptions have been added.
Support has been added in Elasticsearch 7.0 rc1 for selecting percolator query candidate matches containing geo_point based queries.

Bug fixes and deprecations
Size has been deprecated in the cluster state response.
Fallback to java on PATH has been deprecated.
The issue of sibling pipeline aggregators reduction during non-final reduce has been solved.
nextDoc has been extended to delegate to the wrapped doc-value iterator for date_nanos.
Non-superusers are now allowed to create API keys.
A consistent view of realms is now used for authentication.
Reading auto-follow patterns from x-content has been enabled in Elasticsearch 7.0 rc1.
Auto-followers are stopped on shutdown.
Node tool cleanup has been fixed.
Serializing state is now avoided in case it is already serialized.
waitForActiveShards is ignored when syncing leases.
multi_value_field_leniency has been added inside FieldHitExtractor.

For more information, check out the official Elasticsearch 7.0 rc1 release notes.

Further reading:
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
Search company Elastic goes public and doubles its value on day 1
How does Elasticsearch work? [Tutorial]
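As a quick way to poke at a running 7.0 rc1 node, here is a minimal sketch using the official elasticsearch Python client: it reports the server version and checks whether the new internal ".security-7" index is present. The localhost URL and the very idea of checking this way are assumptions for illustration; it is not an official upgrade or migration check.

```python
# Minimal inspection sketch using the elasticsearch Python client
# (pip install elasticsearch). The URL is an assumption; adjust for your cluster.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

info = es.info()
print("Cluster:", info["cluster_name"], "version:", info["version"]["number"])

# Elasticsearch 7.0 rc1 switches the internal security index to ".security-7".
print(".security-7 present:", es.indices.exists(index=".security-7"))
```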

IBM sued by former employees on violating age discrimination laws in workplace

Fatema Patrawala
01 Apr 2019
5 min read
Technology giant IBM has been sued by four former employees in federal court in New York for violating laws prohibiting age discrimination in the workplace: the Older Workers Benefit Protection Act (OWBPA) and the Age Discrimination in Employment Act (ADEA). The suit alleges that IBM gave its senior employees negative performance reviews so it could oust them from the company as it formed a "Millennial Corps" and focused on hiring "early professionals."

The employees who filed the lawsuit are Steven Estle, Margaret Ahlders, Lance Salonia and Cheryl Witmer. They allege that the company began downgrading their annual performance scores in 2014, and that they started receiving worse evaluations than in previous years. When they were fired in 2016, IBM falsely characterized their departures as retirements, the suit claims.

"I did my job very well and received glowing remarks on my annual evaluations for 33 years," said Cheryl Witmer, a plaintiff in the complaint who was terminated as part of a Resource Action in May 2016 at age 57. "Suddenly in my 34th year, I was unfairly downgraded in my annual evaluation. Nothing about my work changed; what changed is that IBM decided to replace me with a much younger worker."

"In the past six years alone, IBM has discharged over 20,000 U.S. employees who were at least 40 years old in pursuit of a company-wide practice of using forced group terminations, referred to as 'Resource Actions,' to accomplish its goal of removing older employees from its labor force," said the lawsuit. Three of the plaintiffs had worked at the company for more than three decades, and one for more than 10 years, the suit said. All were over 55 when they were sacked in May 2016.

The suit alleges that IBM required employees to submit claims of age discrimination to binding arbitration, but also banned them from collective arbitration over such claims. IBM said in an emailed statement that the plaintiffs' theories have been rejected by courts including the U.S. Supreme Court. "We are confident that our arbitration clauses are legal and appropriate," the firm said, adding that a body of Supreme Court cases upholds arbitration agreements.

The purported purge started in 2014, with the firm carrying out a plan to fix its "seniority mix" by imposing an "aggressive performance management posture," the suit alleges. One in-house presentation showed that this posture meant doubling the proportion of workers receiving negative performance evaluations, so 3,000 employees could be laid off and replaced with "early professionals."

"In 2015 and 2016, IBM doubled down on its efforts to replace its long-tenured, older employees with the younger Millennials it sought to recruit," the suit alleges. "IBM made presentations to its senior executives calling for IBM to evaluate its long-term employees more harshly, to use those negative evaluations to justify selecting long-term employees for lay-off, and to replace these employees with ‘EPs’– IBM management short-hand for ‘early professionals.’" A 2016 presentation concerning one section of the company "specifically called for managers to exempt all ‘early professional hires’ from layoff, regardless of performance," the suit claims. "The long-serving, older employees were provided no such exemption."

Also starting in 2014, IBM began demanding that laid-off workers waive their right to collective action, the suit alleges. Employees were offered severance worth a month's salary, continuing health and life insurance coverage for a period depending on time with the firm, free career counseling, and up to $2,500 for skills training. But workers would not receive any of those benefits if they didn't sign an agreement not to bring age-discrimination claims collectively, even in arbitration. By this arrangement, IBM sought to deprive workers of the economies and advantages of pursuing legal action together, and "instead to burden them with the limitations and costs of bringing individual actions challenging the same discriminatory practices in secret arbitrations separate from each other," the suit alleges. "With misgivings, but facing the prospect of a difficult job search and economic hardship, each Plaintiff reluctantly signed the waiver," the suit says.

The suit takes aim at a 2006 IBM internal report on employee demographics that purportedly called older workers "gray hairs" and "old heads," and concluded that younger workers were "generally much more innovative and receptive to technology than baby boomers." In 2014, IBM made no secret that it was shifting its resources and focus to a much younger demographic. For example, the company launched a blog, "The Millennial Experience," and a social media campaign led by the hashtag #IBMillennial. The suit also cites a presentation given at a 2014 IBM event, in which slides allegedly indicated that Millennials exhibit desirable work qualities such as trusting data and making decisions through collaboration, while workers over 50 have undesirable attributes such as being "more dubious" of analytics, putting "less stock in data" and being less motivated to consult colleagues.

The complaint comes after a bombshell story last March by ProPublica, "Cutting ‘Old Heads’ at IBM," which exposed a company-wide pattern of age discrimination practices spanning many units and geographical locations. In addition to seeking to invalidate the illegal waiver of their rights under the ADEA, the plaintiffs will seek the issuance of notice to all other similarly situated laid-off older employees who were coerced into signing the invalid releases. They are also seeking the certification of their collective claims under the ADEA, as well as declaratory, equitable, and monetary relief. Copies of the lawsuit and arbitration complaints are available through these links.

Further reading:
IBM, Oracle under the scanner again for questionable hiring and firing policies
Diversity in Faces: IBM Research's new dataset to help build facial recognition systems that are fair
IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments


Winners for the 2019 .NET Foundation Board of Directors elections are finally declared

Amrata Joshi
29 Mar 2019
4 min read
The results of the 2019 .NET Foundation Board of Directors election have finally been revealed. Of the 476 eligible voters, 329 cast ballots. After counting the ballots using Scottish STV (Single Transferable Vote), Jon Skeet, Sara Chipps, Phil Haack, Iris Classon, Ben Adams, Oren Novotny, and Beth Massi were declared winners. In total, 45 candidates competed for 6 seats. Beth Massi was appointed by Microsoft, while the rest were elected by .NET Foundation members.

Following are the winner profiles:

Jon Skeet: a Java developer at Google in London who is also a C# author and community leader.
https://twitter.com/jonskeet/status/1111540160305475584
Sara Chipps: Engineering Manager at Stack Overflow.
https://twitter.com/SaraJChipps/status/1111458522418552835
Phil Haack: a developer and author, best known for his blog, Haacked.
https://twitter.com/haacked/status/1111493618441703427
Iris Classon: software developer and cloud architect at Konstrukt, and a member of MEET (Microsoft Extended Experts Team).
Ben Adams: co-founder and CTO of Illyriad Games.
https://twitter.com/jongalloway/status/1111324076981682176
Oren Novotny: Microsoft Regional Director, MVP, and chief architect of DevOps & modern software at Insight.
https://twitter.com/onovotny/status/1111410983749115905
Beth Massi: Product Marketing Manager for the .NET platform at Microsoft, who has previously worked for the .NET Foundation since 2014.
https://twitter.com/BethMassi/status/1108838511069716480

How did the election process go?

A candidate's votes for a round are calculated by taking the sum of the votes from the previous round and the votes received in the current round; the votes received in the current round and the votes transferred away in the current round represent the "votes being transferred." The single transferable vote system was chosen because it is a type of ranked-choice voting used for electing a group of candidates, for instance a committee or a council. In this type of voting, votes are transferred from losing candidates to the other choices on each ballot. (A toy illustration of one elimination round follows the further-reading list below.)

Round 1
The first round considered the count of first choices. Since none of the candidates had surplus votes, the candidates who received the fewest votes, or no votes at all, were eliminated, and the votes for the remaining candidates were carried over to the next round.

Round 2
Round 2 calculated the count after eliminating Lea Wegner and Robin Krom, who received 0 votes. There was a tie between Lea Wegner and Robin Krom when choosing candidates to eliminate; Lea Wegner was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 3
Round 3 calculated the count after eliminating Robin Krom and transferring votes. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 4
Round 4 calculated the count after eliminating Nate Barbettini and transferring votes. There was a tie between Peter Mbanugo, Robert McLaws, Virgile Bello, Nate Barbettini, and Marc Bruins when choosing candidates to eliminate; Nate Barbettini was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 5
The fifth round considered the count after eliminating Marc Bruins and transferring votes. There was a tie between Peter Mbanugo, Robert McLaws, Virgile Bello, and Marc Bruins when choosing candidates to eliminate, out of which Marc Bruins was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

In total there were 41 such elimination rounds before the winners were finally declared. To know more about this news, check out Opavote's blog post.

Further reading:
Fedora 31 will now come with Mono 5 to offer open-source .NET support
Inspecting APIs in ASP.NET Core [Tutorial]
.NET Core 3 Preview 2 is here!
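The elimination-and-transfer step described above can be illustrated with a tiny, self-contained sketch. This is a simplified ranked-choice count with hypothetical ballots and arbitrary tie-breaking; real Scottish STV also handles election quotas and surplus transfers, which are omitted here, and this is not the code Opavote used.

```python
# Toy illustration of the elimination-and-transfer step described above.
# Real Scottish STV also applies quotas and surplus transfers (omitted here).
from collections import Counter

def count_round(ballots, eliminated):
    """Count each ballot for its highest-ranked candidate not yet eliminated."""
    tallies = Counter()
    for ranking in ballots:
        for candidate in ranking:
            if candidate not in eliminated:
                tallies[candidate] += 1
                break
    return tallies

# Hypothetical ballots: each is a ranked list of candidate names.
ballots = [
    ["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"],
    ["C", "B", "A"], ["C", "A", "B"], ["B", "C", "A"], ["C", "B", "A"],
]

eliminated = set()
while True:
    tallies = count_round(ballots, eliminated)
    print(dict(tallies))
    if len(tallies) <= 2:                    # stop once only two candidates remain
        break
    loser = min(tallies, key=tallies.get)    # ties broken arbitrarily here
    eliminated.add(loser)                    # the loser's ballots transfer next round
```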


'Developers' lives matter': Chinese developers protest over the “996 work schedule” on GitHub

Natasha Mathur
29 Mar 2019
3 min read
Working long hours at a company, devoid of any work-life balance, is rife in China's tech industry. Earlier this week, on Tuesday, a GitHub user with the name "996icu" created a webpage, shared on GitHub, to protest against the "996" work culture in Chinese tech companies. The "996" work culture is an unofficial work schedule that requires employees to work from 9 am to 9 pm, 6 days a week, totaling upward of 60 hours of work per week.

The 996icu webpage cites the Labor Law of the People's Republic of China, according to which an employer can ask its employees to work long hours due to the needs of production or business, but the extended working time should not exceed 36 hours a month. Also, as per the Labor Law, employees following the "996" work schedule should be paid 2.275 times their base salary. In reality, however, Chinese employees following the 996 schedule rarely get paid that much.

GitHub users also called out companies like Youzan and Jingdong, which both follow the 996 work rule. The webpage cites the example of a Jingdong PR post on the company's maimai (a Chinese business social network) account saying "(Our culture is to devote ourselves with all our hearts (to achieve the business objectives))."

The 996 work schedule started to gain visibility in recent years but has been a "secret practice" for quite a while. The 996icu webpage went viral online and ranked first on GitHub's trending page on Thursday; it has currently amassed more than 90,000 stars (GitHub's bookmarking tool). The post is also being widely shared on Chinese social media platforms such as Weibo and WeChat, where many users are talking about their experiences as tech workers who followed the 996 schedule.

This gladiatorial work environment in Chinese firms has long been a bone of contention. South China Morning Post writer Zheping Huang published a post sharing stories of different Chinese tech employees that shed light on the grotesque reality of China's Silicon Valley. One example is a 33-year-old Beijing native, Yang, who works as a product manager at a Chinese internet company and wakes up at 6 am every day to get through a two-and-a-half-hour commute to work. Another is Bu, a 20-something marketing specialist who relocated to an old complex near her workplace; she pays high rent, shares a room with two other women, and no longer has access to coffee shops or good restaurants.

A user named "discordance" on Hacker News commented on the GitHub protest, asking developers in China to move to better companies: "Leave your company, take your colleagues and start one with better conditions. You are some of the best engineers I've worked with and deserve better". Another user, "ceohockey60", commented: "The Chinese colloquial term for a developer is "码农". Its literal English translation is "code peasants" -- not the most flattering or respectful way to call software engineers. I've recently heard horror stories, where 9-9-6 is no longer enough inside one of the Chinese tech giants, and 10-10-7 is expected (10am-10pm, 7 days/week)".

The 996icu webpage states that people who "consistently follow the "996" work schedule.. run the risk of getting..into the Intensive Care Unit. Developers' lives matter".

Further reading:
What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019
China's Huawei technologies accused of stealing Apple's trade secrets, reports The Information
Is China's facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience


Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads

Sugandha Lahoti
29 Mar 2019
4 min read
On Thursday, Facebook rolled out a new Ad Library to provide more stringent transparency aimed at preventing interference in elections worldwide. Facebook ads were previously used to try to influence the 2016 U.S. presidential election. The Ad Library provides information on all active ads running on a Facebook Page, including politics or issue ads; a previous version of this library, called Ad Archive, only included ads related to politics or policy issues. Anyone can explore the Library, with or without a Facebook account.

A day before Facebook's Ad Library launch, Mozilla and a group of 10 independent researchers published five guidelines that Facebook's and Google's ad transparency APIs should meet to ensure elections are protected. These guidelines were shared publicly with European Commissioners Mariya Gabriel, Julian King, Andrus Ansip, and Vera Jourova, as well as with Facebook and Google. Facebook's Ad Library more or less meets Mozilla's requirements. However, the API itself, and any data collected from it, is not accessible to and shareable with the general public: only those who have passed Facebook's identity confirmation process are allowed to access the API. Also, Mozilla asked for the availability of advertisements going back 10 years, while Facebook will provide data going back 7 years. Weekly, monthly, and quarterly reports are downloadable by anyone.

How does the Ad Library work?

The Ad Library provides information on who saw an ad, how much money the buyer spent to run it, and the number of impressions it received. This information about ads will be provided for seven years. Users can now search by Page, not just keywords, and for logged-in Facebook users, past searches are saved. People can also report ads from within the Ad Library. The Library includes additional information about the Pages where the ads appeared, including:
Page creation date, previous Page merges, and name changes.
Primary country location of the people who manage a Page, provided it has a large audience or runs ads related to politics or issues in select countries.
Advertiser spend information for ads related to politics or issues where the Ad Library Report is currently available. This includes all-time spend and spend over the last week, which was previously only available in the Ad Library Report.

Starting in mid-May, Facebook says it will update the Ad Library Report on politics- and issue-related ads daily, rather than just weekly or monthly. The company is also expanding access to the Ad Library API to a wider group of researchers to analyze ads related to politics or issues. (A hedged query sketch follows the further-reading list below.)

For the EU Parliamentary elections

Ahead of the European Parliamentary election in May 2019, Facebook is also introducing ad transparency tools in the EU. These tools, Facebook said, have two major goals: "Preventing online advertising from being used for foreign interference, and increasing transparency around all forms of political and issue advertising." Under these new tools, EU advertisers will need to be authorized in their country to run ads related to the European Parliamentary election or to issues of importance within the EU; the company will use a combination of automated systems and user reporting to enforce this policy. Advertisers also need to provide a "Paid for by" disclaimer clearly communicating who is responsible for the ad. On clicking the disclaimer, information such as the campaign budget associated with an individual ad, how many people saw it, and their age, location, and gender will also be displayed.

Satvik Shukla, Product Manager at Facebook, wrote in a blog post, "We're committed to creating a new standard of transparency and authenticity for advertising. By the end of June, we'll roll out transparency tools for political or issue ads around the world."

Facebook's battle over ad transparency is still facing trouble on a different front, though. Yesterday, Facebook was charged with housing discrimination by the U.S. Department of Housing and Urban Development. The department alleged that Facebook's targeted advertising platform violates the Fair Housing Act, "encouraging, enabling, and causing" unlawful discrimination by restricting who can view housing ads. HUD had previously also alerted Twitter and Google last year that it is monitoring their practices for similar violations.

Further reading:
Facebook takes an initiative against discriminative ads on its platform
Facebook deletes and then restores Warren's campaign ads after she announced plans to break up Facebook
Open letter from Mozilla Foundation and other companies to Facebook urging transparency in political ads
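For researchers who pass Facebook's identity confirmation and obtain an access token, queries against the Ad Library API go through the Graph API's ads_archive edge. The sketch below is assumption-laden: the API version, parameter names, and field list follow the Graph API documentation of the time and may have changed, and ACCESS_TOKEN is a placeholder.

```python
# Hedged sketch of querying the Ad Library API via the Graph API ads_archive edge.
# ACCESS_TOKEN is a placeholder; version, parameters, and fields reflect the
# documentation of the time and may differ today.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires Facebook identity confirmation

params = {
    "search_terms": "election",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['IE']",
    "fields": "page_name,ad_creative_body,spend,impressions",
    "access_token": ACCESS_TOKEN,
}

resp = requests.get("https://graph.facebook.com/v3.2/ads_archive", params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```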

ACM honors the three Pioneers in Artificial Intelligence with $1 million Turing Award for 2018

Natasha Mathur
28 Mar 2019
2 min read
The Association for Computing Machinery (ACM) has announced Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, three pioneers in artificial intelligence, as winners of the 2018 Turing Award. The award was presented to the researchers for their "conceptual and engineering breakthroughs" that made deep neural networks a critical component of computing.

https://twitter.com/ylecun/status/1110851884624035852
https://twitter.com/geoffreyhinton/status/1110962177903640582
https://twitter.com/AndrewYNg/status/1110913633758769158

The ACM Turing Award, named after the great British mathematician Alan M. Turing, is often referred to as the "Nobel Prize of Computing." It carries a $1 million prize, with financial support provided by Google, which will be split between the winners.

ACM states that Hinton, LeCun, and Bengio worked independently and together to develop the conceptual foundations of the field. They worked diligently to identify surprising phenomena through experiments and contributed engineering advances that demonstrated the practical advantages of deep neural networks. These deep learning methods have led to astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics, among other fields.

LeCun, Hinton, and Bengio stayed committed to the approach of using artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence. The researchers faced much criticism initially, and their ideas were often met with skepticism, but they persisted and their ideas have resulted in major technological advances. "At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year's Turing Award winners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun," said Jeff Dean, Google Senior Fellow and SVP, Google AI.

Dr. Hinton now works as a VP and Engineering Fellow at Google, Dr. LeCun is the Chief AI Scientist at Facebook, and Dr. Bengio has inked deals with IBM and Microsoft.

Further reading:
NGI0 Consortium to award grants worth 5.6 million euro to open internet projects
UC Davis students bag $500k award and the 2018 Amazon Alexa prize for creating a social conversational system
Mozilla funds winners of the 2018 Creative Media Awards for highlighting unintended consequences of AI in society


Facebook will ban white nationalism and separatism content in addition to white supremacy content

Fatema Patrawala
28 Mar 2019
6 min read
Yesterday Facebook rolled out a policy to ban white nationalist content from its platforms. This is a significant step toward meeting the longstanding demands of civil rights groups, who have said the tech giant was failing to confront the powerful reach of white extremism on social media. The threat posed by social media enabling white nationalism was violently underlined this month when a racist gunman killed 50 people at two mosques in New Zealand, using Facebook and other social media platforms to post live video of the attack. Facebook removed the video and the gunman's account soon after, but the footage had already been widely shared on Facebook, YouTube, Twitter, Reddit, and the 8chan website.

In a blog post titled "Standing Against Hate," posted on Wednesday, the company said the ban takes effect next week. As of midday Wednesday, the feature did not yet appear to be live, based on searches for terms like "white nationalist," "white nationalist groups," and "blood and soil." As part of its policy change, Facebook said it would divert users who search for white supremacist content to Life After Hate, a nonprofit that helps people leave hate groups, and would improve its ability to use artificial intelligence and machine learning to combat white nationalism. Based on information in Motherboard's report, the platform will use content matching to delete images previously flagged as hate speech. There was no further elaboration on how that would work, including whether or not URLs to websites like 4chan and 8chan would be affected by the ban.

Facebook will not differentiate between white nationalism, white separatism and white supremacy

The company had previously banned white supremacist content from its platforms but maintained a murky distinction between white supremacy, white nationalism, and white separatism. On Wednesday, it said that its views had been changed by civil society groups and experts in race relations, and that it now believed "white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups."

Kristen Clarke, the president of the Lawyers' Committee for Civil Rights Under Law, which helped Facebook shape its new attitude toward white nationalism, said the earlier policy "left a gaping hole in terms of what it provided for white supremacists to fully pursue their platform." "Online hate must be confronted if we are going to make meaningful progress in the fight against hate, so this is a really significant victory," Ms. Clarke said.

"It's clear that these concepts are deeply linked to organized hate groups and have no place on our services," Facebook said in a statement posted online. It later added, "Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism." "Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion — and that has always included white supremacy," the company said in a statement. "We didn't originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people's identity."

Civil rights groups welcome the ban but await implementation before approving Facebook's move

Facebook's decision was praised by civil rights groups and experts in the study of extremism, many of whom had strongly disapproved of the company's previous understanding of white nationalism. Madihha Ahussain, a lawyer for Muslim Advocates, a civil rights group, said the policy change was "a welcome development" in the wake of the New Zealand mosque shootings. But she said the company still has to explain how it will enforce the policy, including how it will determine what constitutes white nationalist content. "We need to know how Facebook will define white nationalist and white separatist content," she said. "For example, will it include expressions of anti-Muslim, anti-Black, anti-Jewish, anti-immigrant and anti-LGBTQ sentiment — all underlying foundations of white nationalism? Further, if the policy lacks robust, informed and assertive enforcement, it will continue to leave vulnerable communities at the mercy of hate groups."

Mark Pitcavage, who tracks domestic extremism for the Anti-Defamation League, said the shift from Facebook was "a good thing if they were using such a narrow definition before." Mr. Pitcavage said the term white nationalism "had always been used as a euphemism for white supremacy, and today it is still used as a euphemism for white supremacy." He called the two terms "identically extreme." He said white supremacists began using the term "white nationalist" after the civil rights movement of the 1960s, when the term "white supremacy" began to receive sustained scorn from mainstream society, including among white people. "The less hard-core white supremacists stopped using any term for themselves, but the more hard-core white supremacists started using 'white nationalism' as a euphemism for 'white supremacy,'" he said. And he said comparisons between white nationalism and American patriotism or ethnic pride are misplaced: "Whiteness is not an ethnicity, it is a skin color," Mr. Pitcavage said. "And America is a multicultural society. White nationalism is simply a form of white supremacy. It is an ideology centered on hate."

Color of Change, a progressive nonprofit civil rights advocacy group, called Facebook's new moderation policy a critical step forward. "Color Of Change alerted Facebook years ago to the growing dangers of white nationalists on its platform, and today, we are glad to see the company's leadership take this critical step forward in updating its policy on white nationalism," the statement reads. "We look forward to continuing our work with Facebook to ensure that the platform's content moderation guidelines and training properly support the updated policy and are informed by civil rights and racial justice organizations."

Further reading:
How social media enabled and amplified the Christchurch terrorist attack
Google and Facebook working hard to clean image after the media backlash from the Christchurch terrorist attack
Facebook under criminal investigations for data sharing deals: NYT report


Microsoft, Adobe, and SAP share new details about the Open Data Initiative

Natasha Mathur
28 Mar 2019
3 min read
Earlier this week at Adobe Summit, the world's largest conference focused on Customer Experience Management, Microsoft, Adobe, and SAP announced that they are expanding their Open Data Initiative (ODI). The CEOs of Microsoft, Adobe, and SAP launched the Open Data Initiative at the Microsoft Ignite conference in 2018; its core idea is to make it easier for customers to move data between each other's services.

Now, the three partners are looking to transform customer experiences with the help of real-time insights delivered via the cloud. They have also come out with a common approach and a set of resources to help customers create new connections across previously siloed data.

Read also: Women win all open board director seats in Open Source Initiative 2019 board elections

"From the beginning, the ODI has been focused on enhancing interoperability between the applications and platforms of the three partners through a common data model with data stored in a customer-chosen data lake," reads the Microsoft announcement. This unified data lake offers customers their choice of development tools and applications to build and deploy services. The companies have also come out with a new approach for publishing, enriching, and ingesting initial data feeds from Adobe Experience Platform into a customer's data lake. The approach will be activated via Adobe Experience Cloud, Microsoft Dynamics 365, Office 365, and SAP C/4HANA, providing a new level of AI enrichment that helps firms serve their customers better.

To further advance the development of the initiative, Adobe, Microsoft, and SAP also shared plans to convene a Partner Advisory Council comprising over a dozen firms, including Accenture, Amadeus, Capgemini, Change Healthcare, and Cognizant. Microsoft states that these organizations believe there is a significant opportunity in the ODI to help them offer altogether new value to their customers. "We're excited about the initiative Adobe, Microsoft and SAP have taken in this area, and we see a lot of opportunity to contribute to the development of ODI," states Stephan Pretorius, CTO, WPP.

Further reading:
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Microsoft announces: Microsoft Defender ATP for Mac, a fully automated DNA data storage, and revived office assistant Clippy
Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio

Shodan Monitor, a new website that monitors the network and tracks what is connected to the internet

Amrata Joshi
28 Mar 2019
2 min read
Just two days ago, the team at Shodan introduced Shodan Monitor, a new website that helps users set up network alerts and keep track of what’s connected to the internet.

Features of Shodan Monitor

Networking gets easy with Shodan Monitor
Users will be able to explore what they have connected to the internet within their network range. They can also set up real-time notifications in case something unexpected shows up.

Scaling
The Shodan platform can handle networks of all sizes. Even an ISP dealing with millions of customers can rely on Shodan in that scenario.

Security
Shodan Monitor helps in monitoring users’ known networks and their devices across the internet. It helps in detecting leaks to the cloud, identifying phishing websites, and spotting compromised databases.

Shodan navigates users to important information
Shodan Monitor keeps dashboards precise and relevant by providing the most relevant information gathered by its web crawlers. The information shown on users’ dashboards is filtered before being displayed.

Component details

API
Shodan Monitor provides users with a developer-friendly API and command-line interface that exposes all the features of the Shodan Monitor website (a short illustrative sketch of the API follows below).

Scanning
Shodan’s global infrastructure helps users scan their networks in order to confirm that an issue has been fixed.

Batteries
Shodan’s API plan subscription gives users access to Shodan Monitor, the search engine, the API, and a wide range of websites.

A few users are happy about this news and excited to use it.
https://twitter.com/jcsecprof/status/1110866625253855235
According to others, the website still needs some work, as they are facing errors while using it.
https://twitter.com/MarcelBilal/status/1110796413607313408

To know more about this news, check out Shodan Monitor.

Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
Grunt makes it easy to test and optimize your website. Here’s how. [Tutorial]
FBI takes down some ‘DDoS for hire’ websites just before Christmas
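The article doesn’t demonstrate the API itself, but a minimal sketch, assuming the official shodan Python package and a valid API key (the key string, network range, and alert name below are placeholders), might look roughly like this:

# pip install shodan   (assumes the official Shodan Python client)
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder; Monitor features require a Shodan API plan
api = shodan.Shodan(API_KEY)

try:
    # Create a network alert for a hypothetical example range; Shodan Monitor
    # then notifies you when something unexpected shows up inside it.
    alert = api.create_alert("office-network", "198.51.100.0/24")
    print("Created alert:", alert["id"])

    # List all alerts configured for this account.
    for existing in api.alerts():
        print(existing["name"], existing["filters"]["ip"])

    # Ask what Shodan currently knows about a single host in that range.
    host = api.host("198.51.100.10")
    print("Open ports:", host.get("ports", []))
except shodan.APIError as exc:
    print("Shodan API error:", exc)

Since the same underlying API backs the Monitor website, alerts created this way should also show up in the web dashboard, which is what makes the scripted and browser-based workflows interchangeable.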

Microsoft says tech companies are “not comfortable” storing their data in Australia thanks to the new anti-encryption law

Sugandha Lahoti
28 Mar 2019
3 min read
Tech companies are no longer comfortable storing their data in Australia due to its strict encryption law, warned Microsoft president Brad Smith. Smith was speaking at a Committee for Economic Development of Australia event in Canberra on Wednesday.

In December, Australia passed a rushed assistance and access bill which gives Australian police and government agencies the power to issue technical notices. The law requires tech companies to help law enforcement agencies break into individuals’ encrypted data. Using secret warrants, the government can even compel a company to serve malware remotely to the target’s device.

Microsoft has expressed concerns about storing data in Australia, saying its operations in the country remain unchanged but that it is worried about the law’s “potential consequences”. Smith said that Australia had “emerged as a country where companies and governments were comfortable” with storing data, a boon to the tech sector and the economy.

“But when I travel to other countries I hear companies and governments say ‘we are no longer comfortable putting our data in Australia’,” he added. “So they are asking us to build more data centers in other countries, and we’ll have to sort through those issues.”

Smith said that the bill was written to protect companies from introducing a “systemic weakness” to their platforms, but this term is not well defined. After a deal between the Coalition and Labor, a definition was added, but the phrasing is unclear and has left the industry unsure how to interpret it.

“There is this wonderful phrase about enabling companies to avoid creating a systemic weakness but that phrase is not defined,” Smith said. “Until it is defined I think people will worry and we will be among those who will worry because we do feel it is vitally important we protect our customer’s privacy.”

“We will have to sort through those issues but if I were an Australian who wanted to advance the Australian technology economy, I would want to address that and put the minds of other like-minded governments at ease,” he added.

Since its introduction, the assistance and access bill has been slammed by many. Critics include ProtonMail, a Swiss-based end-to-end email encryption company, and Australia’s email provider FastMail, which reported that it is losing customers because of the bill.

Voicing similar resistance yesterday, Atlassian co-founder Scott Farquhar said that the legislation is putting the Australian technology industry in a “chokehold,” creating uncertainty and putting jobs at risk. He also called on the federal government to keep its promise and revisit the bill.

FastMail expresses issues with Australia’s Assistance and Access bill
Australia’s Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many in the tech community
Australian intelligence and law enforcement agencies already issued notices under the ‘Assistance and Access’ act despite opposition from industry groups