Archive for the ‘Tech’ Category


New features include document scanning and improved sharing from the desktop

Dropbox just dumped a ton of new productivity features on users of its file storage and collaboration service that are all aimed at making it easier for people to get work done within its applications.

Updates to the Dropbox app for iOS allow users to scan documents directly into the cloud storage service, and get started with creating Microsoft Office files from that app as well. The company also increased the ease and security of sharing files through Dropbox, and made it easier to preview and comment on files shared through the service.

These launches mean that Dropbox will be more valuable to people as a productivity service, and not just a folder to hold files. It’s especially important as the company tries to capture the interest of business users, who have a wide variety of competing storage services they could subscribe to instead.

Starting Wednesday, users of Dropbox’s iOS app will see a big plus button that they can tap to add content to Dropbox from their phone. To help with that, Dropbox is adding support for scanning documents with the iPhone camera, and saving them as PDFs. The scanning feature lets users upload multipage documents, and gives them the ability to adjust the settings of each scan so that uploaded documents are at their most readable.

Users can also upload photos from their phone using a new photo upload workflow that lets them add individual images, or all of the pictures taken on a particular day, into Dropbox. The service will use machine learning to try to recognize when documents are the subject of uploaded photos (whether through the iOS app or other means) and offer to convert and process them into scans.
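For developers, the same upload path the app uses is exposed through Dropbox's public API. Here is a minimal sketch, assuming the official "dropbox" Python SDK and a placeholder access token and file path:

    import dropbox

    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

    # Upload a locally produced scan as a PDF; WriteMode.add avoids
    # overwriting any existing file of the same name.
    with open("scan.pdf", "rb") as f:
        dbx.files_upload(f.read(), "/Scans/scan.pdf",
                         mode=dropbox.files.WriteMode.add)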

Dropbox Business customers will be able to search for text inside those scanned documents, thanks to new optical character recognition functionality that the company made available for its top tier of paying customers. It builds on full-text search capabilities that Dropbox already has available for digital documents uploaded to its service.
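That search capability is reachable through the same public API that powers the apps. A minimal sketch, assuming the official "dropbox" Python SDK, a Business account with the OCR feature enabled, and a placeholder token:

    import dropbox

    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

    # On accounts with OCR, matches can come from text recognized inside
    # scanned documents, not just from file names.
    results = dbx.files_search_v2("quarterly invoice")
    for match in results.matches:
        print(match.metadata.get_metadata().path_display)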

Using the plus button, people can also start Microsoft Office documents from the Dropbox iOS app. First, users select the document type and where they want to save it, and give the file a name, all inside the Dropbox app. After that, they're sent out to one of Microsoft's mobile apps to edit the file they just created, with all the changes being saved back to Dropbox.

Company representatives wouldn’t say when users could expect Dropbox’s Android app to get the same features, but said that the company believes in making sure that its apps have feature parity across platforms.

Dropbox users now have more granular controls for sharing files directly from a PC or Mac.

People who want to use Dropbox to share files will have an easier time doing so with new updates released Wednesday. The Dropbox apps for Mac and Windows now let users access detailed file sharing settings from the Mac Finder or Windows Explorer. That means users can set granular permissions for sharing documents without having to use the Dropbox web interface.

On top of that, all users can now share single files with specific people, rather than having to provide open access to everyone with a link or giving a list of people specific access to a folder. Users of Dropbox’s free tier will also be able to share folders in read-only mode, something that was previously only available to paying customers.
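In API terms, sharing a single file with specific people corresponds to adding named members to the file rather than minting an open link. A hedged sketch with the official Python SDK; the path and email address are placeholders:

    import dropbox
    from dropbox.sharing import AddMember, MemberSelector

    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

    # Grant one named person access to one file; the default access
    # level is view-only.
    member = AddMember(MemberSelector.email("colleague@example.com"))
    dbx.sharing_add_file_member("/Reports/q2-summary.pdf", [member])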
Comment anywhere

Dropbox users can now comment on the content inside a document or image, so the people they’re working with can know exactly what is being discussed.

After sharing files, users will now be able to comment on specific parts of a file from their Web browser. Previously, Dropbox comments weren’t able to reference a specific part of a file — now users can highlight an area and discuss it in particular.

In the future, Dropbox will also allow users to attach Dropbox comments to Office files within Microsoft’s desktop productivity apps. It’s planning an update to the Dropbox badge that will let people make live comments on a PowerPoint presentation that show up in Dropbox, without having to leave the file they’re working on.

Dropbox also gave business users a new security-focused feature. Dropbox Business administrators can now access a new audit log, which provides a record of everyone who interacted with a particular file. Those logs can be viewed through an online administrator console, but are also accessible through the company’s API.

That means companies can choose to work with partners like Splunk and Domo to monitor those audit logs and generate notifications if something weird is going on.
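For teams that prefer to poll the log themselves, here is a minimal sketch against the Business API, assuming the official Python SDK and a placeholder team token:

    import dropbox

    team = dropbox.DropboxTeam("YOUR_TEAM_TOKEN")  # placeholder token

    # Fetch the most recent team events; the endpoint also accepts
    # filters such as a time range or an event category.
    result = team.team_log_get_events(limit=50)
    for event in result.events:
        print(event.timestamp, event.event_type)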

The news comes a week after Drew Houston, the company's co-founder and CEO, revealed at a conference that Dropbox is operating cash flow positive. It's a positive sign for the company, which hasn't announced many new, shipping features over the past year. These announcements may signal a sea change going forward — it'll be interesting to see what comes next.

 


Tech’s biggest Fortune 500 companies

Written by admin
June 12th, 2016

Fortune is out with its latest Fortune 500 list for 2016, and tech companies appear frequently throughout the rankings. While the top tech company on the list likely isn't a surprise, it is interesting to note that only two tech companies broke into the Top 10 largest publicly traded companies based on full-year revenue last year.

Apple
Apple was unable to beat out Walmart ($482 billion) and Exxon Mobil ($246 billion), but it is the highest-ranked tech company. Apple moved from No. 5 to No. 3, with revenue growing 28% year over year and profits growing more than 35% to surpass $53 billion.

AT&T
AT&T won out as the largest telecom company on the list, moving up two places and helping the tech industry secure two spots among the top 10. AT&T's revenue increased 11% year over year, and profits more than doubled to $13 billion.

Verizon
Verizon gained two spots from last year after the company’s revenue grew 4%; profits rose 85% to almost $18 billion.

Amazon.com
While Amazon is typically considered a retailer, the fact that it leads the IaaS public cloud computing market makes it one of the most important tech companies today. Amazon jumped from 29 last year to 18 this year, thanks to a 20% increase in revenue. Profits were a slim $596 million.

HP
HP fell one spot on the list from 19 to 20 after its revenues declined 7%. The company went through a tumultuous past year after splitting in half. Profits dropped 9% to $4.5 billion.

Microsoft
Microsoft climbed six spots from 31 last year, posting an 8% increase in revenue. Profits dipped 45% to $12 billion in year two of stewardship by CEO Satya Nadella.

IBM
Big Blue dropped seven spots from last year after revenue declined 12%. CEO Ginni Rometty managed a 10% uptick in profits to $13 billion, however.

Alphabet
Google’s parent company saw modest gains in both revenue (+4.9%) and profit (+15%), with profits landing at $16 billion.

Comcast
Telecom giant Comcast improved by six spots thanks to an 8% rise in revenue, despite a 2.6% drop in profits to $8 billion.

Intel
The world's largest maker of semiconductors had stable revenue (a drop of less than 1%), but profits dipped 2.4% to $11 billion.

Cisco
With CEO Chuck Robbins taking over for John Chambers – who has transitioned to executive chairman – the company jumped six spots thanks to modest revenue growth (4%) and rising profits – up more than 14% to $8.9 billion.

Ingram Micro
This IT distributor announced that it is being sold to a Chinese conglomerate this year after revenues dropped 7% and profits dipped 19% to $215 million.

Oracle
Oracle’s revenue was stagnant year-over-year, but profits dropped 9% to just under $10 billion; even with that, the company jumped four spots in the rankings.

Tech Data
This Clearwater, Fla.-based company is a distributor of technology equipment. Its profits grew 50% to $266 million.

Qualcomm
The San Diego-based semiconductor company is going through a rough patch with revenues declining 5% and profits falling by 34% to $5.2 billion.

Other notable tech companies that were highly ranked on the list included EMC at 113 ($24 billion); Time Warner Cable at 116 ($23 billion); and Facebook at 157 ($17 billion).


The Linux version is slated to arrive next year

The next version of Microsoft’s SQL Server relational database management system is now available, and along with it comes a special offer designed specifically to woo Oracle customers.

Until the end of this month, Oracle users can migrate their databases to SQL Server 2016 and receive the necessary licenses for free with a subscription to Microsoft’s Software Assurance maintenance program.

Microsoft announced the June 1 release date for SQL Server 2016 early last month. Among the more notable enhancements it brings are updateable in-memory column stores and advanced analytics. As a result, applications can now deploy sophisticated analytics and machine learning models within the database, at performance levels as much as 100 times faster than they'd achieve outside it, Microsoft said.

The software's new Always Encrypted feature helps protect data at rest and in memory, while Stretch Database aims to reduce storage costs while keeping data available for querying in Microsoft's Azure cloud. A new PolyBase tool allows you to run queries on external data in Hadoop or Azure blob storage.

Also included are JSON support, “significantly faster” geospatial query support, a feature called Temporal Tables for “traveling back in time” and a Query Store for ensuring performance consistency.
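To make the JSON and Temporal Tables support concrete, here is a hedged sketch using Python's pyodbc against a SQL Server 2016 instance; the connection string and the dbo.Orders table are placeholders, and the temporal query assumes the table is system-versioned:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=localhost;DATABASE=Sales;UID=user;PWD=secret"
    )
    cursor = conn.cursor()

    # Built-in JSON support: return a result set serialized as JSON.
    cursor.execute("SELECT TOP 5 OrderID, Total FROM dbo.Orders FOR JSON AUTO")
    print(cursor.fetchone()[0])

    # Temporal Tables: read a system-versioned table as it existed at a
    # point in the past.
    cursor.execute(
        "SELECT * FROM dbo.Orders FOR SYSTEM_TIME AS OF '2016-05-01T00:00:00'"
    )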

SQL Server 2016 features were first released in Microsoft Azure and stress-tested through more than 1.7 million Azure SQL DB databases. The software comes in Enterprise and Standard editions along with free Developer and Express versions.

Support for SQL Server 2005 ended in April.

Though Wednesday’s announcement didn’t mention it, Microsoft previously said it’s planning to bring SQL Server to Linux. That version is now due to be released in the middle of next year, Microsoft said.


CIOs and other IT professionals need to strategically manage the use of today’s popular consumer messaging apps in the enterprise. While that process can be a challenge, it’s possible to protect your business without blocking all rogue IT.

Today’s mobile device owners commonly use messaging apps to send selfies, command bots to order takeout and collaborate with their coworkers — sometimes simultaneously, and oftentimes via the same app. Nothing in particular precludes messaging apps such as WhatsApp, Facebook’s Messenger, Skype or Telegram from being used for work, play and everything in between. However, these consumer-focused apps are becoming the de facto software for corporate communication, and IT professionals have good reason for concern.

As the distinction between enterprise and consumer messaging apps blurs, IT’s needs and responsibilities are increasingly at odds with those of the workers it supports. Such a disparity can hinder workplace productivity and effective IT management.

“Employees might choose to use a consumer tool to get their jobs done when they don’t have access to something their company offers, or if the corporate tool is too cumbersome,” says Chris Voce, a vice president and research director with Forrester Research. “The primary job of an enterprise IT pro responsible for workforce computing should be to help make workers more productive. If they don’t offer a tool that employees need to get their jobs done, they’re likely going to drive underground use.”
Consumer apps offer genuine business value, but …

Adam Preset, a research director with Gartner, says workers already use consumer messaging apps extensively in the enterprise because the popular tools are often effective and easy to use. “Our mobile devices suit our needs whether we’re on the clock or off,” he says. “It’s more natural for apps that handle messaging, which is ubiquitous, to serve personal and professional needs.”

Enterprises should examine consumer messaging apps, and take stands on acceptable use of such apps for non-critical, non-confidential communication, both internally and externally, according to Preset. “Closing off completely without understanding will just drive legitimate uses underground.”

Messaging apps including Line and WhatsApp are commonly used in the enterprise, but that doesn't mean all consumer apps are well suited for business use, according to Raul Castanon-Martinez, a senior analyst at 451 Research. “Consumer apps will have an advantage given that users might already be familiar with the [user interface] but otherwise will be in the same position as other enterprise messaging apps,” he says. “I don't believe consumer apps transitioning into the enterprise have a significant advantage over enterprise apps like Slack or HipChat.”

Corporate workers can use a tool such as Slack to interact with colleagues and business applications just as easily as they can transition from using Facebook Messenger for talking to friends to using it for work, Castanon says. “The issue is not which apps employees can use, but rather what can they do with these apps?” he says. “Banning consumer apps only makes sense when organizations have not implemented comprehensive security policies.”

When IT professionals properly secure their companies’ assets, they don’t have to worry about the apps employees use, or for what, Castanon says. CIOs who instead ban individual apps fight an “uphill battle, because employees will always find a way to circumvent restrictions,” he says. “If IT is spending too much time monitoring the use of consumer messaging apps it could indicate they’re probably not doing other things that will have a bigger impact for securing company assets.”

Preset suggests that enterprises adopt a tiered approach to messaging apps. For example, consumer apps can handle simple, everyday tasks such as coordinating meetings or connecting with colleagues, he says. However, “[w]hen you’re communicating about your enterprise’s intellectual property or customer data, you need an enterprise answer.”
IT approach to messaging apps should be strict and strategic

Many enterprise messaging apps are specifically designed to protect core business interests. The most common management features in such apps include administrative controls, integration with data services, audit, archive and encryption tools, and security-policy enforcement.


Some businesses also demand service level agreements and timely, guaranteed support from their messaging-software vendors, according to Preset. “The terms and conditions of many of these [consumer] messaging apps do not at all reference or favor the enterprise,” Preset says. “With these apps, even if the worker sent the message as part of her job, her employer doesn’t own the data.”

The unchecked use of consumer apps in the enterprise can also create “a huge security hole” that threatens corporate regulatory compliance, according to Anurag Lal, president and CEO of Infinite Convergence Solutions, an enterprise messaging vendor. However, businesses shouldn’t restrict the use of any specific app unless they also provide viable alternatives, Lal says.

Ultimately, the equation isn’t complex: If employees’ only corporate communications options are a clunky email interface and an easy-to-use consumer-centric messaging app, the choice is a simple one.


No IoT without IPv6

Written by admin
May 22nd, 2016

Does your company foresee making big bucks from the Internet of Things? It won’t be happening without widespread adoption of IPv6 first.

Do you think the Internet of Things (IoT) will be the Next Big Thing? It can’t be. Not until we get past the real Next Big Thing: IPv6.

Without the extensive global adoption and successful deployment of IPv6 as the primary version of the Internet Protocol, the IoT won’t be possible. In fact, the future of the Internet itself is at stake. Here are the five reasons why:

1. The IoT will need more IP addresses than IPv4 can provide.
According to Gartner’s estimate, by 2020 there will be more than 26 billion IoT devices connected to the Internet. Cisco is thinking even bigger; it has projected that there will be more than 50 billion devices connected to the Internet by 2020.

Unfortunately, IPv4 is still widely used, and IPv4 has only 4.3 billion possible IP addresses. Now, it's true that not every IoT device will need an IP address, but IPv4 can accommodate less than 20% of the devices that Gartner projects for a mere four years from now. Worse, most IPv4 addresses have already been depleted, with the one minor exception worldwide being in Africa. And even Africa's allocation is projected to be depleted by March 31, 2018.

How much of a difference would IPv6 make? A lot. It has a total of 340 undecillion (that is 340 trillion trillion trillion) addresses. Even with the IoT fulfilling Cisco’s expectations, that should be enough for years to come.
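The arithmetic is easy to verify directly; a quick Python check:

    # IPv4 vs. IPv6 address space.
    ipv4 = 2 ** 32    # 4,294,967,296 addresses (~4.3 billion)
    ipv6 = 2 ** 128   # ~3.4 x 10**38 addresses (340 undecillion)

    print(f"IPv4: {ipv4:,}")
    print(f"IPv6: {ipv6:,}")

    # Even Cisco's 50 billion projected devices would occupy a
    # vanishingly small fraction of the IPv6 space.
    print(f"Share used by 50 billion devices: {50 * 10**9 / ipv6:.1e}")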

But IPv6 adoption is weak. As of May 14, worldwide IPv6 traffic reaching Google totaled about 11.6%. The adoption rate for the U.S. federal government was about 62% for public-facing websites as of May 16. The good news is that adoption is increasing: global IPv6 traffic accessing Google was less than 3% in January 2014, and only about 35% of the public-facing websites of U.S. federal agencies were using IPv6 back then.

2. Cloud computing also needs more IP addresses than IPv4 can provide.
When Microsoft chose to use IPv4 for the data centers that would support its cloud computing initiative, it had to chase around the globe for the extremely limited supply of available IPv4 addresses, and it paid a very high price for them.

Supplies on the secondhand IPv4 exchange market are getting thin, so the price for IPv4 addresses will be going even higher — by some estimates, up to $100 per IPv4 address in the near future. Guess who will ultimately pay such exorbitant prices. The customers, of course.

3. Adopting an IPv6-only policy can dramatically reduce cybersecurity threats.
This is simple: The moment we turn off IPv4, we will eliminate global cyberattacks and security threats based on the IPv4 stack. It may be that we have lost the battle against the bad actors in the IPv4 stack. But, we may still have a fighting chance to win the war in the IPv6 stack. This may be our best chance to gain the upper hand.

4. IPv4 is only a beta version of the Internet.
According to Vint Cerf, one of the fathers of the Internet and co-inventor of the TCP/IP protocol suite, IPv4 is only “the experimental version of the Internet.” But we have been using this beta version of his Internet protocol since 1983. As Cerf stated, IPv6 is the actual production version of the Internet for the 21st century.

Why have we been using a beta version in our production environment for so long?

5. Adopting IPv6 is a matter of leadership, vision and competitive edge.
Service providers and product manufacturers keep saying that there is no demand for IPv6 from their customers. But it is nonsense to wait for them. The majority of consumers do not know which version of IP is running in their electronic devices, and they don’t care.

What really matters is whether a company's leadership has the vision to ensure that it retains a competitive edge for its products and services and is situated to thrive in a new era of rapid technological innovation based on IPv6. Companies that say there is no immediate money to be made by transitioning to IPv6 need to ask themselves whether they intend to make money from the IoT. One estimate, from Business Insider, is that the IoT represents at least a $6 trillion opportunity. But the IoT won't be happening without IPv6.


 

Tech companies snag 20 spots on Glassdoor’s ranking of 25 highest paying companies in America

Tech companies dominate Glassdoor’s ranking of the highest paying companies in the U.S., snagging 20 of the top 25 spots. But no tech company ranks higher than Juniper Networks, which pays its workers a median total compensation of $157,000.

The next-highest ranking tech company is Google, which landed at No. 5 on Glassdoor’s list with a median total compensation of $153,750.

While tech companies earned the most spots on the list, consulting firms set the high bar for compensation in Glassdoor’s report, “25 Highest Paying Companies in America for 2016.” No. 1 on the list is A.T. Kearney, which pays a median total compensation of $167,534. Strategy&, at No. 2 on the list, pays a median total compensation of $160,000.

Juniper placed third among the 25 companies, while McKinsey & Company ranked fourth with a median total compensation of $155,000.

Glassdoor's total compensation figures include base salary as well as other forms of pay, such as commissions, tips and bonuses. The data comes from U.S.-based employees who voluntarily shared their compensation on Glassdoor's website during the past year. Companies considered for Glassdoor's report must have received at least 50 salary reports from U.S.-based employees during that 12-month time frame.

“Salaries are sky-high at consulting companies due to ‘barriers of entry’ in this field, which refers to employers wanting top consultants to have personal contacts, reputations and specialized skills and knowledge,” said Andrew Chamberlain, Glassdoor chief economist, in a statement. “In technology, we continue to see unprecedented salaries as the war for talent is still very active, largely due to the ongoing shortage of highly skilled workers needed.”

Here is Glassdoor’s full list of the 25 highest paying companies in the U.S.:

1. A.T. Kearney: median total compensation $167,534; median base salary $143,620
2. Strategy&: median total compensation $160,000; median base salary $147,000
3. Juniper Networks: median total compensation $157,000; median base salary $135,000
4. McKinsey & Company: median total compensation $155,000; median base salary $135,000
5. Google: median total compensation $153,750; median base salary $123,331
6. VMware: median total compensation $152,133; median base salary $130,000
7. Amazon Lab126: median total compensation $150,100; median base salary $138,700
8. Boston Consulting Group: median total compensation $150,020; median base salary $147,000
9. Guidewire: median total compensation $150,020; median base salary $135,000
10. Cadence Design Systems: median total compensation $150,010; median base salary $140,000
11. Visa: median total compensation $150,000; median base salary $130,000
12. Facebook: median total compensation $150,000; median base salary $127,406
13. Twitter: median total compensation $150,000; median base salary $133,000
14. Box: median total compensation $150,000; median base salary $130,000
15. Walmart eCommerce: median total compensation $149,000; median base salary $126,000
16. SAP: median total compensation $148,431; median base salary $120,000
17. Synopsys: median total compensation $148,000; median base salary $130,000
18. Altera: median total compensation $147,000; median base salary $134,000
19. LinkedIn: median total compensation $145,000; median base salary $120,000
20. Cloudera: median total compensation $145,000; median base salary $129,500
21. Salesforce: median total compensation $143,750; median base salary $120,000
22. Microsoft: median total compensation $141,000; median base salary $125,000
23. F5 Networks: median total compensation $140,200; median base salary $120,500
24. Adobe: median total compensation $140,000; median base salary $125,000
25. Broadcom: median total compensation $140,000; median base salary $130,000
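The spread between the two figures is the implied variable pay (bonuses, commissions and the like). A quick Python sketch using three entries from the list above:

    # Total compensation minus base salary, from the Glassdoor figures.
    entries = {
        "A.T. Kearney": (167_534, 143_620),
        "Juniper Networks": (157_000, 135_000),
        "Google": (153_750, 123_331),
    }

    for company, (total, base) in entries.items():
        variable = total - base
        print(f"{company}: ${variable:,} variable ({variable / total:.0%} of total)")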


 

Data science is one of the fastest growing careers today and there aren’t enough employees to meet the demand. As a result, boot camps are cropping up to help get workers up to speed quickly on the latest data skills.

Data Scientist is the best job in America, according to data from Glassdoor, which found that the role has a significant number of job openings and that data scientists earn an average salary of more than $116,000. According to its data, the job of data scientist rated a 4.1 out of 5 for career opportunity and a 4.7 for job satisfaction. But as demand for data scientists grows, traditional schools aren't churning out qualified candidates fast enough to fill the open positions. There's also no clear path for those who have been in the tech industry for years and want to take advantage of the lucrative job opportunity. Enter the boot camp, a trend that has quickly grown in popularity as a way to train workers in in-demand tech skills. Here are 10 data science boot camps designed to help you brush up on your data skills, with courses for anyone from beginners to experienced data scientists.

Bit Bootcamp

Located in New Jersey, Bit Bootcamp offers both part-time and full-time courses in data analytics that last four weeks. It has a rolling start date, and courses cost between $1,500 and $6,500, according to data from Course Report. It's a great option for students who already have a background in SQL as well as object-oriented programming skills in languages such as Java, C# or C++. Attendees can expect to work on real problems they might face in the workplace, whether at a startup or a large corporation. The course concludes with a Hadoop certification exam that draws on the skills learned over the four weeks.
Price: $1,500 – $6,500

NYC Data Science Academy
The NYC Data Science Academy offers 12-week courses in data science that combine “intensive lectures and real world project work,” according to Course Report. It's aimed at more experienced data scientists who have a master's or Ph.D. degree. Courses include training in R, Python, Hadoop, GitHub and SQL with a focus on real-world application. Participants will walk away with a portfolio of five projects to show potential employers, as well as a Capstone Project that spans the last two weeks of the course. The NYC Data Science Academy also helps students garner interest from recruiters and hiring managers through partnerships with businesses. In the last week of the course, students participate in mock interviews and job search prep; many will also have the opportunity to interview with hiring tech companies in the New York and Tri-State area.
Price: $16,000

The Data Incubator
The Data Incubator is another program aimed at more experienced tech workers who have a master's or Ph.D., but it's unique in that it offers fellowships, meaning students who qualify can attend for free. Fellowships, which must be completed in person, are available in New York City, Washington, D.C., and the Bay Area. The program also offers students mentorship directly from hiring companies, including LinkedIn, Microsoft and The New York Times, all while they work on building a portfolio to showcase their skills. The boot camp programs run for eight weeks, and students need to have a background in engineering and science skills. Attendees can expect to leave this program with data skills that are applicable at real-world companies.
Price: Free for those accepted

Galvanize
Galvanize has six campuses, located in Seattle; San Francisco; Denver, Fort Collins and Boulder, Colo.; Austin, Texas; and London. The focus of Galvanize is to develop entrepreneurs through a diverse community of students that includes programmers, data scientists and Web developers. Galvanize boasts a 94 percent placement rate for its data science program since 2014, and students can apply for partial scholarships of up to $10,500. According to Galvanize, students have gone on to work for companies such as Twitter, Facebook, Airbnb, Tesla and Accenture. This boot camp is intended to combine real-life skills with education so that graduates walk away ready to start a new career or advance at their current company through formal courses, workshops and events.
Price: $16,000

The Data Science Dojo
With campuses in Seattle, Silicon Valley, Barcelona, Toronto, Washington and Paris, the Data Science Dojo brings quick and affordable data science education to professionals around the world. It's one of the shortest programs on this list, lasting only five days, and it covers data science and data engineering. Before you even attend the program, you get access to online courses and tutorials to learn the basics of data science. Then you'll start the in-person program, which consists of 10-hour days over the course of five days. Finally, after the boot camp is complete, you'll be invited to exclusive events, tutorials and networking groups that will help you continue your education. Due to the short nature of the course, it's tailored to those already in the industry who want to learn more about data science or brush up on the latest skills. However, unlike some of the other courses on this list, you don't need a master's degree or Ph.D. to enroll; it's aimed at anyone at any skill level who simply wants to throw themselves into the trenches of data science and become part of a global network of companies and students who have attended the same program.
Price: Free for those accepted

Metis
Metis has campuses in New York and San Francisco, where students can attend intensive in-person data science workshops. Programs take 12 weeks to complete and include on-site instruction, career coaching and job placement support to help students make the best of their newly acquired skills. Similar to other boot camps, Metis’ programs are project-based and focus on real-world skills that graduates can take with them to a career in data science. Those who complete the program can expect to walk away with in-depth knowledge of modern big data tools, access to an extensive network of professionals in the industry and ongoing career support.
Price: $14,000

Data Science for Social Good
This Chicago-based boot camp has specific goals; it focuses on churning out data scientists who want to work in fields such as education, health and energy to help make a difference in the world. Data Science for Social Good offers a three-month long fellowship program offered through the University of Chicago, and it allows students to work closely with both professors and professionals in the industry. Attendees are put into small teams alongside full-time mentors who help them through the course of the fellowship to develop projects and solve problems facing specific industries. The program lasts 14 weeks and students complete 12 projects in partnership with nonprofits and government agencies to help tackle problems currently facing those industries.
Price: Free for those accepted

Level
Offered through Northeastern University, Level is a two-month program that aims to turn you into a hirable data analyst. Each day of the course focuses on a real-world problem that a business might face, and students develop projects to solve these issues. Students can expect to learn more about SQL, R, Excel, Tableau and PowerPoint, and to walk away with experience in preparing data, regression analysis, business intelligence, visualization and storytelling. You can choose between a full-time eight-week course that meets five days a week, eight hours a day, and a hybrid 20-week program that meets online and in person one night a week.
Price: $7,995

Microsoft Research Data Science Summer School
The Microsoft Research Data Science Summer School — or DS3 — runs for eight weeks during the summer. It's an intensive program intended for upper-level undergraduates or graduating seniors, with a goal of growing diversity in the data science industry. Attendees get a $5,000 stipend as well as a laptop that they keep at the end of the program. Classes accommodate only eight people, so the process is selective, and the program is only open to students who already reside in, or can arrange their own accommodations in, the New York City area.
Price: Free for those accepted

Silicon Valley Data Academy
The Silicon Valley Data Academy, or SVDA, hosts eight-week training programs in enterprise-level data science skills. Those who already have an extensive background in data science or engineering can apply to be a fellow and have the tuition waived. You can expect to learn more about data visualization, data mining, statistics, machine learning and natural language processing, as well as tools such as Hadoop, Spark, Hive, Kafka and NoSQL. The program consists of a more traditional curriculum, including homework, but it also includes guest lectures, field trips to the headquarters of collaborating companies, and projects that offer real-world experience.
Price: Free for those accepted

 


Q&A: Mobile app security should not be an afterthought

Written by admin
February 13th, 2016

As enterprises struggle to keep up with their internal demand for mobile apps, more are turning to rapid development workflows. What does this mean for security?

As enterprises struggle to keep up with their internal demand for mobile apps, more are turning to speedier development workflows, such as the Minimum Viable Product (MVP), which essentially calls for mobile development teams to focus on the highest return on effort relative to risk when choosing apps to develop and features to build within them. That is: focus on the apps and capabilities that users are actually going to use, and skip those they won't.

Sounds simple, but what does that mean when it comes to security? We know application security is one of the most important aspects of data security, but if software teams are moving more quickly than ever to push apps out, security and quality assurance need to be part of the process.

The flip side is that minimum apps and features could mean less attack surface. To get some answers on the state of mobile app security and securing the MVP, we reached out to Isaac Potoczny-Jones, research lead for computer security at Galois, a computer security research and development firm.

Potoczny-Jones has been a project lead with Galois since 2004 and is an active open source developer in cryptography and programming languages. He has led many successful security and identity management projects for government organizations, including the Navy, the DOD and DHS, as well as federated identity for the Open Science Grid (DOE), mobile password-free authentication (DARPA) and anti-forgery authentication in hardware devices (DARPA).

Please tell us a little about Galois and your role there in security.

Galois is a computer security research and development firm out here in Portland, Ore. We do a lot of work with the US federal government; we've been around since 1999, and I've been here for 11 years now. I think a lot about this topic. I really appreciate, and employ myself, the lean methodologies for product development, and I love the lean startup approach. I also do security analysis for companies, so I've gone into a number of start-ups and looked at their security profile for their products or their infrastructure, and helped them develop a security program. I've definitely seen both sides of the issue as far as where MVP thinking leads you.

What are you seeing out within organizations today when it comes to mobile security?

There's definitely a lot more development in mobile happening. The best practices in mobile aren't as well developed as best practices for the Web, though that's getting a little bit better. Consider HTTPS. For quite some time, something that is relatively straightforward on the Web was being done wrong on mobile, for years, before anyone really noticed. There's a lot you can get wrong with HTTPS, and they were getting it all wrong. As people move over to mobile they are definitely having to relearn some of the lessons we learned over the years.
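A classic instance of “doing HTTPS wrong” is disabling certificate validation to silence an error, which quietly allows man-in-the-middle attacks. The mistake appears in mobile code in every language; a Python sketch with a hypothetical URL keeps it compact:

    import requests

    # The anti-pattern: verify=False turns off certificate checking.
    requests.get("https://api.example.com/login", verify=False)  # DON'T

    # The fix: leave validation on (the default) and, if a private CA is
    # involved, point at the right CA bundle instead of disabling checks.
    requests.get("https://api.example.com/login", verify=True)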

Password security is another one of those. People began to make passwords on websites a lot more robust. You can't just have a four- or five-letter password anymore on most websites. But because mobile devices are so difficult to type passwords into, a lot of sites have relaxed those password rules. In reality, the threat is just the same as it always has been.

What impact do you see from the minimum viable product, or minimum viable app, trend?

On the MVP front, there's a very fascinating challenge with security, because security is a non-functional requirement. I tend to like the lean scrum methodology. I don't know if you're familiar with that one, but I can use it as an example. They're all kind of similar in some ways. They emphasize features; they emphasize things the users can see. They emphasize testing out ideas and getting them into the market, testing them, gathering metrics about how effective they are, and using that as feedback into the product. That's a really good idea about how to develop a product. But even the terminology itself, minimum viable product, really emphasizes minimizing.

It emphasizes getting rid of what you don’t need. Those things together, minimizing things and really having an emphasis on what the user can do and see, that makes it so that non-functional requirements are kind of an afterthought. You have to squint to figure out how to apply non-functional requirements like security to a lot of these processes like scrum.

I would imagine with an MVP teams want to move the app out as quickly as possible, so they don’t want to spend a lot of time threat modeling and going through a lot of additional process, because that’s all adding to more development time. So there seems to be a natural friction between the goals of MVP and good security.

It's absolutely a friction. It's challenging because security is mostly invisible. That means good security and bad security look exactly the same, until something goes wrong. Security is really visible when something is broken or somebody gets hacked and you make the news. Then it kind of blows up in your face. We've seen this a few times. I don't know how many start-ups it's killed, probably a few, but it's definitely cost a lot of start-ups when their first major news coverage is that they were hacked.

What are some ways organizations can ease that tension when it exists? Is there a way to bring security in so it’s not too obtrusive? Is there a way to separate out apps by data type? And possibly greenlight MVP apps that don’t touch more sensitive data, and give a closer look at those apps that do?

I think that's a good approach. As you point out, one way is to say, let's see if we can do an MVP with data that's not as sensitive, so you won't have to focus as strongly on security. Nowadays, that's a little more challenging. Even for the minimum things you do, you will need security. It kind of doesn't matter what your data is; you will get targeted, you will get attacked, even if it's just by the automated bots that run around the Internet attacking everything. They'll use your infrastructure for sending spam at the very least, if that's all they can do. To me, the approach is that you have to implement some of the industry best practices, such as the OWASP Top 10. You have to believe that security is an important part of a minimum viable product to even begin to get these user stories in there.

What I like to tell people is to think about user stories, even negative user stories, along the lines of: as a user, I don't want to see my personal information leaked on the Internet because I've shared or stored something sensitive in your app or your website. I don't want to see it in the hands of people who will use my private information against me.

That sounds like something a security team could build a guide around, or turn into a checkpoint that decides whether an app can go through. For instance, if certain conditions are true of the app, or even just one of them, it has to go through a security review. If not, a security-light approach is OK within certain guidelines.

That'd be perfect. Typically these lean approaches have at least some kind of testing methodology built in, or acceptance testing. Or, as some of them say, “What's your definition of ‘done’?” The first step is just saying, “We're going to include security in these definitions of done.” Once you've at least penetrated that level, which I don't think a lot of people have, then people are going to at least do the right things: you're going to start to build security into either the user stories or the acceptance testing.
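As an illustration of what building security into acceptance testing can look like, here is a hedged sketch of a “negative user story” as an automated check, using pytest conventions and a hypothetical staging host and endpoint:

    import requests

    BASE = "https://staging.example.com"  # hypothetical staging host

    def test_profile_requires_authentication():
        # Negative user story: without credentials, another user's
        # profile must not be readable.
        resp = requests.get(f"{BASE}/api/users/42/profile")
        assert resp.status_code in (401, 403)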

But you can't leave it all to the end of the process. If you leave security acceptance testing toward the end, your schedule is naturally going to slip, and then you'll get to the security testing and find there's a lot more work to do. Then you'll face the unfortunate decision of either fixing things and letting your schedule slip further, or letting something go out the door that's not secure.

The real tragedy is when a system is inherently insecure, built in a way that requires major rework because you didn't think about security at the beginning. A lot of things are easy to add at the end with security, but sometimes you run into systems that are just kind of broken from the foundation. As with any of these things, the later you catch it, the costlier it's going to be.

If you're looking at your to-do list, whatever that to-do list is, whether it's a list of stories or a big list of tasks and action items, you should be recognizing some security issues in there as you go. You'll get to a point, as you're developing something, where one of your developers hopefully will say, “Well, look, our system is vulnerable to cross-site request forgery or a cross-site scripting attack,” which any system that's not designed to protect against them is going to be.

If you look at your bug list, you should see that pop up there at some point. Some of these security issues will come up during development, because nothing will be perfect. That’ll be an early indicator.

If you don't have anything, if you look at your bug list and you don't see anything, if your developers aren't actively talking about security, saying “We're going to have to add some tasks for security” or “I want to add that feature for you, but that's going to have an impact on security,” if you're not hearing it as part of the conversation, then there's going to be a problem.


IT spending tanked worldwide last year

Written by admin
January 19th, 2016

But the U.S. bucked the trend, as spending rose

Worldwide IT spending fell nearly 6% last year — the largest one-year decrease research firm Gartner says it has ever seen. The global forecast for 2016 is for an improving, but relatively flat, $3.54 trillion. That would be a 0.6% increase.

Gartner blames a strong U.S. dollar for the global decline, because it effectively increased the price of exports by as much as 20%. Political and economic instability in countries such as Russia and Brazil also contributed to the spending problems. By comparison, the U.S. saw an increase in IT spending.

In the U.S., IT spending increased 3.1% to $1.14 trillion. The U.S. forecast this year is for a 1.2% increase.

Globally, “we're just in this anemic growth period,” said John-David Lovelock, a research vice president at Gartner. The countries that saw the most problems with IT spending include Russia, Japan and Brazil.

The economic issues also changed how firms bought IT products and services, said Lovelock.

Instead of buying a product license for $1 million, for instance, users are switching to SaaS products for $100,000 a year. Cloud services have also replaced physical servers, he said.

Globally, there were declines in every area of IT spending, including software, devices and services. The only area to post growth was data center systems spending, largely thanks to cloud.

The IT area expected to see the largest gains this year is software; it is expected to rise 5.3% to $326 billion globally. CRM is the hot area, as users seek to integrate social media with their business needs.

 


New job realities ahead for IT workers

Written by admin
January 16th, 2016

Next time, an economic downturn may be different for tech

The change in IT hiring was illustrated this week by General Electric Co., which announced it is moving its headquarters from Fairfield County, Conn., to Boston. In doing so, Jeff Immelt, GE’s CEO, said Greater Boston is home to 55 colleges and universities, and “attracts a diverse, technologically fluent workforce.”

Four months prior, GE announced formation of a new business, GE Digital, a $6 billion unit with a goal of becoming “a top 10 software company by 2020,” said Immelt at the announcement. To help staff up for this initiative, GE is hiring technology workers capable of new product development.

This isn’t happening just at GE. IT employment is broadly shifting away from infrastructure support, which is increasingly vulnerable to offshore outsourcing and migration to cloud services.

“GE is basically reinventing itself and trying to become the leading industrial software company in the world,” said Erik Dorr, vice president of research at management consulting firm Hackett Group.

For GE this means building platforms to support new technologies, such as Internet of Things-enabled products. “They recognize that all of this is predicated on having access to top talent,” said Dorr.

IT employment has, in the past, followed the economy. The Great Recession resulted in massive IT job layoffs, as companies cut back-office operations. But today's shift to “digitization” of products — turning consumer wares into connected products, adapting to mobile and utilizing business intelligence, robotics and social media — has increased demand for people with these skills.

This means that if the global stock sell-off and crashing oil prices result in new waves of layoffs, tech workers who develop new products, markets and digital experiences may be in the best position to survive.

Firms “are going to hire these people no matter what happens to the economy,” said David Foote, the CEO of Foote Associates, which researches the IT labor market. “If there is a downturn, they work even harder to keep the people they’ve got,” he said.

Technology jobs are now embedded throughout organizations, and many CIOs may not have the control over technology spending they once did. But they still are responsible for a sizeable part of IT spending.

Estimates of the number of new IT jobs added last year range from 125,000 to about 180,000, similar to what happened in 2014. This is based on an analysis of government labor data by labor market analysts.

In 2016, IT budgets “are still growing, but only at 2% at the median,” said Frank Scavo, the president of Computer Economics, a research firm. That’s down from 3% IT budget growth in 2015.

“We do not see layoffs on the horizon,” said Scavo, whose firm runs ongoing surveys of IT managers. “It’s not a hiring boom by any means, but tech staffing is still healthy,” he said. Only 7% of IT executives expect to see staff cuts in 2016, while 40% plan to hire more staff members, said Scavo.

But Victor Janulaitis, the CEO of Janco Associates, said IT hiring, which slowed in the last few months of last year, will be impacted by the financial market turmoil. “I think we’re seeing the first phase of a new downturn in the economy,” he said. He expects IT hiring to be flat this year.

For his part, Mark Roberts, the CEO of TechServe Alliance, which also tracks IT hiring, doesn’t see the recent softening in IT hiring as a sign of impending economic decline.

“IT employment has been growing at a very steady clip and still outperforms the overall workforce,” said Roberts. “At some point, the significantly elevated rate of growth is not sustainable,” he said.

There’s another factor that may have had a role in GE’s move to Boston: GE has been angry over Connecticut’s rising tax rates, creating a political storm.

The tax climate is more favorable in Massachusetts than in Connecticut, says the Tax Foundation, an independent tax policy research organization. Massachusetts is ranked 25th nationally, while Connecticut is near the bottom of tax favorability at 44th. But the tax climate is even worse in California, which is ranked 48th, and that's the state with the nation's highest concentration of technology jobs.


 

Inside AT&T’s grand dynamic network plan

Written by admin
January 7th, 2016

The service provider shares lessons learned from early adopters of its first Network on Demand service and outlines what comes next

AT&T is pouring billions into its network to make it more dynamic, which is resulting in new capabilities for enterprise customers. Network World Editor in Chief John Dix recently stopped by AT&T headquarters in Dallas to talk to Josh Goodell, VP of Network on Demand, about what the company is learning from early adopters of its Switched Ethernet on Demand service and what comes next. Among other things, Goodell explains how provisioning now takes days vs. weeks, service profiles can be changed in seconds, and how he expects large shops to use APIs to connect their network management systems directly to AT&T controls. Oh, and a slew of virtual functions are on the horizon that will enable you to ditch all those appliances you’ve been accumulating.

Let’s start with the big picture view of AT&T’s dynamic network efforts. What’s the goal?
Usually when I talk about our strategy I start at the network access layer. This is the physical infrastructure that AT&T has built over years – the fiber network and technologies like LTE on the wireless side, and what we call Lightspeed, which is a combination of fiber and copper. It’s a very robust network that has a tremendous reach and tremendous speed. All of that is foundational to what we’re doing now. Our Network on Demand platform acts like rapid onramps to that very fast network. So that physical layer is important and one area of the overall puzzle.

Another area is led by John Donovan, Senior Executive Vice President of Technology and Operations, who is driving our software-centric architecture. We've called it different things over the last couple of years, including Domain 2.0, but at its core it's about driving virtualization within our own network. He's made the commitment that by 2020 we're going to virtualize 75% of our network. That's all about driving up utilization in the network and enabling scale and flexibility.


The third piece is enabling these same types of capabilities for our business customers, and that’s really where Network on Demand comes into play. It’s taking technologies that we’re utilizing internally and making our core strategic services better by utilizing the same technologies.

The Network on Demand platform initially launched with one capability — AT&T Switched Ethernet on Demand — and the second service that will launch is Managed Internet Service on Demand. Then we will continue to add additional services over time. So Network on Demand is creating this platform that enables customers to have a rapid onramp to that very robust network.

That gives customers more control of their network, the ability to rapidly scale up or scale down their network, and improves TCO, not just because you have the ability to use exactly what you want, but also because you can be more productive. You can spin up a location more rapidly than you could have in the past.

Then we will also start getting into services that take advantage of both SDN and NFV, where you’re actually virtualizing what has typically been purpose-built appliances. We don’t have a product in the market yet but we’ve announced that the first iteration will be available in the next few months.

We’ve reorganized our entire technology organization around network simplification and a software-centered network, and then exposing those capabilities to our customers. That’s the big picture and Network on Demand is one piece of that picture.

It’s important to understand that we have a lot of conviction across all three of those areas. From 2009 to 2014 we spent about $140 billion in those three areas. These aren’t hobbies. These are how we’re committed to drive a differentiated network experience.

How long has the switched Ethernet service been available?
We opened the first market, in Austin, Texas, in November 2014, expanded to five markets in February of this year, and in April expanded to more than 170 markets. That is very, very fast for any service we've ever stood up. Part of it is the technology; it's different. It's built on a software layer that allows for rapid product instantiation, but we also used an agile approach to development, a DevOps model, and the combination of those things allowed us to move at a rapid pace.

When does the Internet on Demand service go live?

Managed Internet on Demand is in controlled introduction (CI) in Atlanta. Interestingly, with AT&T Switched Ethernet on Demand there's no virtualization happening; it is an SDN layer on top of existing network infrastructure. Managed Internet on Demand is a different architecture that takes advantage of both SDN and NFV, so we'll be virtualizing the customer edge. Typically the customer has a router on premises; that will be virtualized in the AT&T cloud, and then we will also be virtualizing the provider edge. It's a big deal because I expect that over time we'll be virtualizing a lot of different services, so with this next service we're going through what it takes to do this for the first time ever.

The initial offering will be a virtualized router only. If they want they can buy a switch and put it at the end of their network and run something off of that. Eventually we’ll have a use case where we’ll actually deploy a piece of CPE on-premise they can use, but the initial use case is a pure virtualized router.


Coming back to the Ethernet On Demand service, how many customers do you have?
It’s been really interesting to see the way that has played out. As of today it’s over 350 customer networks, about 1,000 locations. That’s a lot more than what we had expected. Market demand has been pretty strong.

Is there a typical customer profile emerging?
It has been across industries. The largest network we have provisioned is about 150 locations, and the smallest is two locations. So it’s run the gamut. It is more prevalent so far in the mid-market and down-market, but every single one of our segments has seen traction.

One interesting thing is how we’ve simplified the selling experience. We’ve enabled our sales people to use an iPad to order the service and do all of the contract work with the customer on their premises. Historically the presale cycle alone took days. Now everything can be done in one sit-down discussion. That’s not the cool, interesting technology that SDN represents, but it is an interesting example of how, when you take friction out of the experience on both the seller’s side and the customer’s side, you’re going to see traction, and we’ve seen it with Ethernet.

How long does it take to deliver Switched Ethernet On Demand?
When fiber is available to the customer building, and if you take out the “Customer not ready” situations, it’s five days. The equivalent when you’re in a fiber location but not on Network on Demand is probably closer to four weeks. When you don’t have fiber availability the cycle time obviously goes up because you have the build process, but we’re still automating the overall process with Network on Demand.

How much are early customers actually changing the Switched Ethernet service profile once they are online?
There are a couple of things customers can do, one of which is to add locations. Customers can go into the portal, which knows the inventory of their locations, and go from, say, a two-location network to a three-location or a four-location network in a matter of days without ever having to talk to anyone. That’s a big deal. Historically that would have been multiple phone calls and a fairly long provisioning cycle. Customers can now do it themselves at their own convenience.

And another common use case is the ability for a customer to scale up or scale down their network. Any business that has seasonality is going to be interested in that use case. For example, we have K-12 as well as college institutions that are very interested in that capability.

We’ve also talked to a few hospitals with branch locations that analyze medical images; they’re interested in the ability to scale up the network to send large payloads and then scale it down again, which isn’t something they could do before.

One of our large customers uses Network on Demand service for redundancy between data centers. They have this as a secondary network and keep it scaled all the way down, and in the event they have an issue they can scale it up within seconds.

Another interesting use case is around rapid provisioning for M&A activity. After a merger or acquisition the network may go from 10 locations to 20 locations literally within a week, and provisioning agility is important for those types of situations.

Does it surprise you that the mid- to smaller-size shops are the early adopters? I would think the largest shops would be dying for these capabilities?
It does surprise me a little bit. I think there are a couple of things at work. I mentioned that customers manage their networks through a portal. Some of our very large customers will want to have their own network management tools tap directly into our network through APIs. We expect to do that. We just haven’t gotten there yet. In fact, I expect at some point we will federate our SDN Layer 2 network with other carriers that have SDN Layer 2 networks. It’s a very natural evolution. And federating the network with a large customer’s network management tools is just another version of that. It’s more of a northbound API as opposed to an east-west interface.

I also think we have a service now that hunts very effectively against competitors that have been attacking us down market.
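As a purely illustrative aside, a customer-side call against such a northbound bandwidth interface might look like the sketch below. AT&T has not published this API, so the endpoint, field names and authentication scheme here are invented for illustration only.

```python
# Hypothetical sketch only: AT&T's actual northbound API is not public,
# so the URL, payload fields and auth below are illustrative inventions.
import requests

API = "https://api.example-carrier.net/ethernet-on-demand/v1"  # hypothetical

def set_bandwidth(circuit_id: str, mbps: int, token: str) -> dict:
    """Ask the carrier's SDN controller to resize an Ethernet circuit."""
    resp = requests.put(
        f"{API}/circuits/{circuit_id}/bandwidth",
        json={"committed_rate_mbps": mbps},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"status": "provisioning", "eta_seconds": 45}

# A data-center customer could scale a standby link up before a failover:
# set_bandwidth("circuit-1234", mbps=1000, token="...")
```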

Going back to bandwidth flexibility, obviously the seasonal use cases make sense, but are other customers tweaking the settings more or less often than you would have expected?

Less often than I expected. When we started we limited the number of changes allowed to one per day because we had no idea what the actual behavior would be. What we’ve seen is customers aren’t going in and ratcheting it up and down frequently during the month. They may do it once or twice, but it’s not as prevalent as I would have expected. It’s still early, though. Since we’ve really been at scale only since April, it’s a bit early to say how much the behavior is going to shift.

What are the increments you can scale up and down?
It goes all the way from 2Mbps to 10Gbps, with several increments along the way.

You said you’re in 170 markets. How many states is that?
Our incumbent AT&T 21-state footprint. We do have fiber assets in other areas, including New York, Philadelphia and Boston. Those markets are not available yet on Network on Demand, but we expect that we will bring them on net in the future. For now the Ethernet on Demand service is limited to our 21-state footprint and the 170+ markets.

Internet on Demand is up next. What comes after that?
The next capability after Managed Internet is what we call Network Functions on Demand. Network Functions on Demand is basically a rethinking of how premises-based appliances are used. Typically today you have purpose-built appliances, whether that’s a router or a firewall, a WAN accelerator, you name it. In the future we will offer what we call universal CPE that can run multiple virtual functions, meaning software instances of those capabilities, on one universal CPE platform.

We are also building the ability to deliver those same types of virtual functions directly through our AT&T cloud. So you can envision a time where customers will have a series of capabilities that are delivered both through a universal CPE on their premise — again, things like router functionality, firewalls and WAN accelerators — as well as capabilities delivered directly through our cloud.

There would be advantages to using one versus the other. For example, for an application that is going to be shared across multiple locations, you probably want more of a cloud approach, whereas an application that’s more specific to a location will sit on the universal CPE.

That capability will begin to roll out in the next few months. It will evolve dramatically over time. The first instantiation of it will be that universal CPE capability with a virtual router. Then we will add other virtual instances within the portfolio over time, and I expect it will evolve pretty dramatically throughout 2016.

I presume the universal CPE is a server that you manage?
Yeah, it’s a white box x86 server sized to run three to four virtual functions. It’s got a gigabit of throughput, so it’s a fairly robust platform. The box will be managed by AT&T. The virtual router will be managed by AT&T. But I expect that over time you’re going to have multiple virtual functions on this box, and we will have options for both AT&T-managed and customer-managed functions.

Will the virtual functions be available from different suppliers?
Yes, it’s an open platform with an ecosystem of partners that will expand over time. So far we’ve announced Juniper, Cisco and Brocade.

The appeal to the customer is fewer appliances to manage?
There are different value propositions with the universal CPE concept. One is that you go from having multiple boxes to having one box. Just in terms of power consumption and having less to worry about, that’s a big deal.

The other thing that is important is that, because the functionality is delivered through software that can be downloaded at any time, box obsolescence becomes less of a problem over time. And the installation cycle time agility plays out here as well. Historically, if we were to install multiple boxes on a customer premises, that typically happened sequentially; it might take 30 days for the first one and upwards of 90 to 120 days all told. In the future this is a plug-and-play model.

Does NetBond fit into this picture?
As we talk about the AT&T SDN story, NetBond is an element of that story. NetBond is basically secure connectivity to third-party clouds. Today, if a customer wants to take advantage of NetBond and AT&T Switched Ethernet on Demand, they can. In the future I expect the two will become more and more integrated, and it will just be an extension of an overall on-demand experience. Both can be used today, but they’re not yet fully integrated through a single pane of glass.


Cryptographic key reuse is rampant in European payment terminals, allowing attackers to compromise them en masse

Some payment terminals can be hijacked to commit mass fraud against customers and merchants, researchers have found.

The terminals, used predominantly in Germany but also elsewhere in Europe, were designed without following best security principles, leaving them vulnerable to a number of attacks.

Researchers from Berlin-based Security Research Labs (SRLabs) investigated the security of payment terminals in Germany and were able to use them to steal payment card details and PINs, hijack transactions and compromise merchant accounts. They plan to present their findings at the 32nd Chaos Communication Congress (32C3) later this month.


According to Karsten Nohl, the founder and chief scientist of SRLabs, most terminals in Germany use two communication protocols, ZVT and Poseidon, to talk with cash registers and payment processing providers respectively.

Both of these protocols have features that can be abused by hackers, but the problem is further exacerbated by poor design decisions by payment terminal manufacturers, like the reuse of cryptographic keys across all devices.

The ZVT protocol is used by around 80 percent of payment terminals in Germany to communicate with cashier workstations, SRLabs estimates. It was originally designed for serial connections, but it’s now used mostly on TCP/IP networks. This means that on local networks attackers can use techniques such as ARP spoofing to position themselves between terminals and cashier stations in order to intercept and send ZVT commands.

Some of the ZVT traffic is unencrypted, according to SRLabs. For example, a man-in-the-middle attacker can use the protocol without authentication to read the information stored on the magnetic stripes of payment cards inserted into payment terminals.

The protocol also has a mechanism for requesting and obtaining a card’s PIN, but such requests need to be signed with a message authentication code (MAC). The MAC is verified using a key that’s typically stored inside the payment terminal’s hardware security module (HSM), a special component designed for secure key storage and cryptographic operations.

The problem is that most terminals, regardless of manufacturer, share the same signature key, violating a basic principle of security design, Nohl said.
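To see why a shared signature key is so damaging, here is a minimal sketch of MAC-based command authorization. ZVT’s exact MAC construction isn’t described here, so HMAC-SHA256 stands in for it; the logic of the flaw is the same regardless of the algorithm.

```python
# Sketch: HMAC-SHA256 stands in for ZVT's actual MAC scheme.
import hmac
import hashlib

# In the flawed deployment, this key is identical in most terminals.
SHARED_KEY = b"same key baked into nearly every terminal"

def sign_command(command: bytes, key: bytes) -> bytes:
    return hmac.new(key, command, hashlib.sha256).digest()

def terminal_accepts(command: bytes, mac: bytes, key: bytes) -> bool:
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)  # constant-time comparison

# Extract SHARED_KEY once (say, from a second-hand terminal's HSM) and
# you can sign a PIN-request command that any terminal sharing the key
# will accept:
cmd = b"REQUEST_PIN"
assert terminal_accepts(cmd, sign_command(cmd, SHARED_KEY), SHARED_KEY)
```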

The HSM in some terminal models is vulnerable to so-called timing side channel attacks that can be used to extract the key within minutes after gaining access to the terminal through a JTAG debugging connection or a remote code execution flaw, he said.

Attackers can easily find and buy such vulnerable terminals on eBay. Once they extract the key from it, they can use it against most other devices, including newer models, because of the pervasive key reuse among payment terminal manufacturers in Germany.

Terminals used in other countries, especially in Europe, use a different communications protocol called OPI (Open Payment Initiative) that is similar to ZVT, but lacks the remote management functionality that attackers can abuse.

However, some terminal manufacturers added proprietary extensions to OPI to implement that functionality, because they like the comfort of remote management, Nohl said. “At least we’ve seen this in a few cases. We can’t guarantee that it’s widespread, but every implementation of OPI that we’ve looked at had extensions that brought back remote manageability, and like in ZVT, it wasn’t secure.”

With magnetic stripe data and the associated PINs, attackers can clone payment cards and commit fraud, even in countries where chip-protected (EMV) cards are widely deployed.

EMV-capable terminals still support magstripe-based transactions for cards that don’t have a chip, and verifying whether the card has a chip or not is usually done by checking a specific bit stored on the magnetic stripe. So an attacker can simply change that bit on his cloned card, Nohl said.

Another attack that the SRLabs team found possible through ZVT is to force a terminal to associate with a different merchant account, such as one controlled by a hacker, which would then receive all the money from transactions performed through that terminal.

This can be done by a man-in-the-middle attack through a password-protected command that instructs the terminal to change its ID to one that the payment processor associates with a different merchant. The password is the same for all terminals tied to a specific processor, the SRLabs researchers found.

When the terminal ID changes, the processor will send a new configuration back to the terminal including the new merchant’s transaction limits and banner — the merchant identifying information that appears on the printed receipts. The attacker can actually intercept this information and change it so that receipts retain the old merchant’s banner, while the money is funneled to the different account controlled by the attacker.

A third attack is possible through the Poseidon protocol that’s also widely used in Germany and in some other countries like France, Luxembourg and Iceland. This protocol is used by terminals to communicate with the backend servers of payment processors and is a variation of an international standard called ISO 8583.

Payment terminals require a secret key to authenticate with payment processors over the Poseidon protocol. However, like with ZVT, payment terminal manufacturers implemented the same authentication key across all of their terminals, SRLabs found.

This error can be abused to steal money from merchant accounts. While most transactions add money to such accounts in exchange for goods or services, there are a few that can cost merchants money, for example transaction refunds or top-up vouchers like those used to recharge prepaid SIM cards.

In the worst case scenario, attackers could hijack terminals and use them to issue refunds to bank accounts under their control from thousands of merchants by simply iterating through terminal IDs, which are usually assigned incrementally.

Nohl said that SRLabs performed a demonstration of the attacks for payment terminal manufacturers. Their response was that they haven’t seen this type of fraud outside of a laboratory setting, but that they’re working to address the issue, he said.

In both cases, the people who implemented these protocols, which were developed independently of each other, didn’t understand how to do proper key management, Nohl said.

Fortunately, both protocols include functionality that allows older keys to be replaced with new ones, which could be used to give every terminal its own unique key, as long as the backend servers are also modified to support such a deployment, the researcher said.

The terminals would still be vulnerable to remote code execution or timing side channel attacks, but at least extracting a key would restrict the abuse to a single terminal, not hundreds of thousands.

In the short term, it’s paramount to replace existing keys with unique ones for every terminal, but in the longer term better standards should be designed that rely less on the security of the terminals themselves. This could be done by implementing things like public-key cryptography instead of symmetric-key algorithms, Nohl said.


All the facts worth knowing about IT leaders’ tech budgets, spending plans, hiring priorities and strategic initiatives for 2016.

Ready, set, disrupt!

If an overarching conclusion can be drawn from the results of Computerworld’s Forecast survey of 182 IT professionals, it’s that 2016 is shaping up to be the year of IT as a change agent.

IT is poised to move fully to the center of the business in 2016, as digital transformation becomes a top strategic priority. CIOs and their tech organizations are well positioned to drive that change, thanks to IT budget growth, head count increases and a pronounced shift toward strategic spending.

Amid the breakneck pace of change in technology and business alike, where should you direct your focus in the new year?

Read on for key highlights and data points on budgeting, hiring, business priorities and disruptive technologies that promise to define the IT landscape in 2016.
[Chart: Computerworld Tech Forecast 2016: Tech Spending Continues to Rise]

IT budgets on the rise…again
As companies continue to rely upon technology to help differentiate themselves in the marketplace, tech budgets remain on an upward trajectory.

Almost half (46%) of respondents to the Forecast 2016 survey indicated that their technology spending will increase in 2016, by an average of 14.7%. (By comparison, last year 43% said spending would increase, by an average of 13.1%.)

Close to an equal number (42%) reported that their technology spending will remain the same, with only 12% anticipating a decrease in IT budgets.
[Chart: Computerworld Tech Forecast 2016: Budget Booms and Busts]

Security, cloud computing are top areas for investing
With security concerns top-of-mind for IT professionals as they gear up for 2016, it’s no surprise that exactly half of respondents chose security as the top area where their companies plan to increase spending.

Cloud computing came in a close second, and the top area where organizations plan to decrease spending is on-premises software — both of which indicate that companies’ journey to the cloud will continue in 2016.

IoT tops new areas of spending for 2016
After several years of languishing in the tech hype cycle, the Internet of Things finally looks to be commanding tech execs’ attention, with 29% of respondents identifying it as a new area of spending for 2016.

Green IT, which likewise had been back-burnered at many organizations, popped up on respondents’ radars as well, with 16% saying energy-saving technologies will be a new spend for them in the year ahead.

IT pros’ No. 1 challenge: Budgeting
As they do every year, budget constraints top the list of leadership challenges identified by survey respondents.

Security came in second among IT pros’ concerns after a year of ever bigger and more serious corporate hacks.

Sam Redden, chief security officer at Brazos Higher Education Service, a Waco, Texas-based student loan servicing company, sums up the feelings of many IT leaders when he says, “I wouldn’t be foolish enough to say I stay ahead of the bad guys. The bad guys stay ahead of everybody.”

Dueling goals for IT in 2016
Survey respondents’ goals for their most important tech projects betray the bimodal nature of the modern IT department.

Tech leaders say they’re striving to maintain or improve service levels, long one of IT’s core responsibilities. At the same time, they’re seeking to generate new revenue streams or increase existing ones, a new responsibility in most evolving technology departments.

“As technology becomes an integral part of every aspect of business and the way we interact with customers, it’s raising the profile of the IT group and forcing IT to think about more than just keeping the lights on,” says David Cearley, a fellow at Gartner. “We are seeing greater alignment as IT steps up to drive digital business.”

A piecemeal journey to the cloud
Heading into 2016, cloud computing shows no signs of slowing down, as tech leaders indicate that spending and new cloud initiatives remain on the upswing.

In terms of where organizations are in their cloud transition, 29% of survey respondents confirmed they had already moved some enterprise applications to the cloud, with more to come, while 7% said they’re in the process of migrating mission-critical systems to a cloud environment.

Interestingly, a full 20% of respondents are bucking the trend entirely, reporting they’re not moving to the cloud at all.

IT staffs to increase in 2016
As budgets rise and projects abound, many firms are looking to increase IT head count. Some 37% of survey respondents said they’re planning to increase staff levels, up from 24% last year.

In keeping with IT’s new role as an organizational agent of change, 42% of survey respondents with hiring plans are in search of people with combined tech and business backgrounds that will allow them to articulate the value of IT in meeting business goals.

Architecture, app dev among most wanted skills
The list of most in-demand IT skills starts off with a surprise. Although IT architecture is a fundamental area of expertise for techies at all levels and in various roles, it rarely makes anyone’s list of hot skills.

The term “IT architect” encompasses a wide range of specialists, from enterprise architects to cloud architects, so recruiters say it makes sense that IT architecture expertise is in demand as companies move forward with all sorts of technology-driven projects.

Beyond that, application development, project management, big data, BI, help desk and cloud all remain high on hiring managers’ lists as IT gears up for the year ahead.


John Reed, senior executive director of IT staffing firm Robert Half Technology, says those hiring managers could be facing a challenge. “The IT market has been really strong, and we’re expecting it will stay that way for the foreseeable future,” he says. “I don’t think you’ll see explosive growth, but you’ll see single-digit growth in demand, consistent with what we’ve seen over the past few years.”

Security, BI talent expected to be scarce
With all eyes on security in the coming year, it’s little surprise that survey respondents expect to have a difficult time hiring technologists with that expertise.

According to Robert Half Technology’s 2016 Salary Guide, salaries in the security field will rise about 5% to 7% next year, ranging from $100,000 on up to nearly $200,000 on average.

Disruptive technologies 3 – 5 years out
When asked what technologies are likely to have an impact in the next three to five years, survey respondents chose cloud computing/software-as-a-service by a wide margin, followed by self-service IT, predictive analytics, the Internet of Things and unified communications.

The cloud will continue to reshape enterprise IT, according to research firm IDC, which predicts that more than half of enterprise IT infrastructure and software investments will be cloud-based by 2018. Specifically, spending on public cloud services will grow to more than $127 billion by 2018, according to an IDC forecast report.

Kicking the tires on new technologies
All manner of virtualization and “as-a-service” options topped survey respondents’ lists of technologies being piloted or beta tested at their organizations, with BI/analytics, cloud computing and mobile/wireless rounding out the top five.

“Virtualization 2.0” is of particular interest to survey respondents, as companies move beyond the first steps of server virtualization to explore virtualized desktop, storage, mobile and network options.

2016 is IoT’s year to shine
In 2016, the Internet of Things (IoT) will no longer be the stuff of science fiction, but rather a near-future reality for IT organizations across many industries, observers say.

In Computerworld’s Forecast 2016 survey, 29% of the respondents identified IoT initiatives — and related machine-to-machine and telematics projects — as new areas of spending for the year ahead. In comparison, just 12% of those polled last year said IoT work would be a new IT expenditure in 2015.

Likewise, the percentage of respondents who said they planned to launch IoT projects over the next 12 months rose from 15% last year to 21% this year. Additionally, 14% of this year’s respondents said they plan to beta-test IoT technologies, up from 7% last year.

Wearables in the enterprise? Not so much
While consumer-oriented wearable devices like Google Glass and the Apple Watch launched to great fanfare, the reality is that enterprises aren’t ready to make practical use of wearable systems, at least for the foreseeable future.

Wearable technology was last on the Forecast 2016 list of systems currently being assessed in beta tests and pilot projects, with only 4% of respondents saying they had projects underway involving wearables.

Furthermore, 78% said they were not currently working on wearable apps or anticipating the need to support wearables in the near future. And only 8% of those polled said wearables would play a role in their business or technology operations, while just 12% indicated that they were adjusting their mobile device management strategies to include wearables.


Have you implemented policies to ensure your business is risk-ready?

Data breaches are serious and very real threats in today’s digital world, and no industry sector is immune. In the medical sector alone, the cost of client data breach liability, expenses, and settlements has surpassed the equivalent costs of medical malpractice. Securing data and minimizing the probability and impact of data breaches is, at its core, a risk-based endeavor.

While many businesses have recognized the need for risk assessment and management, there is still a tendency to treat them as “checkbox” exercises. For a risk management program to provide true benefit, several things are required:

An enterprise-level risk management practice. This is NOT your IT risk management team – it is a standalone and empowered practice that operates at the CXO level. This team is focused on business alignment.
An IT-level risk management practice. This team is focused on the application and testing of applicable risk management frameworks and the controls associated with those frameworks.
Certified and qualified risk management professionals. There are several industry certifications available. CRISC (Certified in Risk & Information Systems Control) and CRMP (Certified Risk Management Professional) are examples. They both require hefty amounts of continuing education, which is critical, given the moving target that cybersecurity has become.

Too often we see businesses with some partial combination of these elements, but we rarely see them address the complete picture.
4 Ways to Approach Risk

Risk assessment doesn’t need to be an enigma. Once risks are identified, they can be dealt with in only one of four ways, with the choice for each risk factor determined with a business-alignment mindset:

Accept the risk. This is appropriate for risk factors with low probability and low impact.
Avoid the risk. Patient: “Doctor, my arm hurts when I do this!” Doctor: “Well then, don’t do that!” In all seriousness, this means that the organization shouldn’t engage in business activities not aligned with their primary mission or outside their area of primary expertise. This is appropriate for risk factors with high probability and high impact.
Transfer the risk. This is appropriate for risk factors with low probability but high impact. Examples are insurance policies and outsourcing of high-capital-expense or high-expertise elements such as data center services. (Disclosure: I work for Lifeline, a provider of data center facilities and services.)
Mitigate the risk. This approach is appropriate for risk factors with high probability but relatively low impact. Additionally, if you happen to be a service provider that other organizations transfer risk to (like a data center provider), you are the last stop for risk, and you must find ways to mitigate it.

Obviously, the parsing of risk factors into their appropriate action buckets is a complex process requiring knowledge of the threats themselves, the technology involved, business alignment, vendor capabilities, actuarial data, etc.
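As a minimal sketch of the bucketing logic above, assuming each risk factor has already been scored for probability and impact on a 0-to-1 scale, the four-way decision might look like this. Real programs layer business alignment, vendor capability and actuarial data on top of a rule this simple.

```python
def risk_action(probability: float, impact: float, threshold: float = 0.5) -> str:
    """Map a scored risk factor to one of the four classic responses."""
    high_p = probability >= threshold
    high_i = impact >= threshold
    if high_p and high_i:
        return "avoid"     # don't engage in the activity at all
    if not high_p and high_i:
        return "transfer"  # insurance, outsourcing to a provider
    if high_p and not high_i:
        return "mitigate"  # add controls to reduce likelihood or damage
    return "accept"        # low probability, low impact: live with it

print(risk_action(0.1, 0.9))  # -> "transfer"
```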

Clearly, organizations that default to avoiding or accepting every risk aren’t setting themselves up for success. Being proactive instead of reactive is key to ensuring you cover as many vulnerabilities as possible.

On the other hand, many businesses realize they don’t have the staff, objectivity, time, or money to allocate to risk management. These can be barriers to success, along with human factors such as politics, turf wars, and ambition. That’s why the most popular of the four options is transferring the risk to someone else, which effectively hands off the fourth option as well: the provider takes on the job of mitigation.

The biggest benefit of this option is that hiring outside help can be the most cost-effective choice, given that the cost of attracting certified risk management professionals and earning certifications for your business can run upwards of $1 million, plus the time and resources that translate into overhead costs. When in doubt, I always recommend transferring the risk to a party that can mitigate it more effectively.
Implementing Risk Management

Before you can develop a risk management practice that makes sense, you need to assess where you currently stand. Instead of trying to assess the situation yourself, it’s important that you hire a third party to complete a risk assessment of your business that spares no detail. Thoroughness is an advantage; the more you know, the more you can mitigate risk.

The next decision you need to make is whether or not you want to eat the cost and handle it internally, or if you want to transfer that risk to an outsourced party.

Finally, regardless of whether you keep it in-house or transfer your risk, you do need to dedicate resources to your risk management practice so you can mitigate vulnerabilities as much as possible.

The consequences of not understanding and addressing your risks can be dire – from not being able to attract quality talent to destroying your reputation and credibility to going out of business.

Are you risk-ready?

 


 

From unstructured data mining to visual microphones, academic labs are bringing future breakthrough possibilities to light

If you take a look at the list of trending repositories on GitHub, you’ll see amazing code from programmers who live around the world and efforts for firms big and small. But one thing you don’t often see is work that comes from the university labs. It’s rare for the next big thing to escape from an academic computer science department and capture the attention of the world.

That’s not a knock on university research. But competing with open source projects that enjoy broad support across the industry and around the world is challenging for a handful of academics and grad students. Sure, many of the top computer science schools are well off, but that doesn’t mean the money is pouring into research. Open source programmers, on the other hand, can usually build better code faster, often because they have bosses who pay them to build something that will pay off next quarter, not next century.

Yet good computer science departments still manage to punch above — sometimes well above — their weight. While a good part of the research is devoted to arcane topics like the philosophical limits of computation, some of it can be tremendously useful for the world at large.

What follows are nine projects currently under development at university labs that are worth your attention. They may not be the absolute best or furthest along, but each has the potential to have a broad impact on the world of computing. Some offer shipping code, others offer mostly potential, but all offer a straightforward path for transforming our world with useful computation.

DeepDive

Big data is one area where academia’s focus on mathematical foundations can pay off, and one of the more prominent packages to gain attention of late is DeepDive, a tool for exploring unstructured text. While many big data projects work with well-structured information that’s already in tables, DeepDive focuses on finding correlations in raw text files and other files that aren’t organized.

The Java code runs a pipeline that pushes the raw data through a set of tools that parses natural language into streams of entities — that is, people, places, companies, or things. Then it uses statistical algorithms to search for connections among the entities, even if they’re not explicitly spelled out. These results are then boiled down to clear inferences and inserted into an old-school database.

The results vary depending upon the style of the text, the nature of the query, and the clarity of the writing, but in good circumstances the tool can deliver better results than humans can. The developers even report that some studies have shown that DeepDive “exceeded the quality of human volunteer annotators in both precision and recall for complex scientific articles.”
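DeepDive itself is a Java pipeline, but the overall shape is easy to sketch. The toy below, which assumes spaCy and its small English model are installed, parses raw text into entities, records crude sentence-level co-occurrences in place of DeepDive’s statistical inference, and lands the results in an old-school database.

```python
import sqlite3
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("Ada Lovelace worked with Charles Babbage in London. "
        "Babbage designed the Analytical Engine.")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mentions (e1 TEXT, e2 TEXT, sentence TEXT)")

for sent in nlp(text).sents:
    ents = [e for e in sent.ents if e.label_ in ("PERSON", "ORG", "GPE")]
    # Naive stand-in for inference: any two entities in one sentence
    # are recorded as a candidate connection.
    for i, a in enumerate(ents):
        for b in ents[i + 1:]:
            db.execute("INSERT INTO mentions VALUES (?, ?, ?)",
                       (a.text, b.text, sent.text))

print(db.execute("SELECT e1, e2 FROM mentions").fetchall())
```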

ZeroCoin
Bitcoin may be many things, but it is not as anonymous as many assume. The system tracks all transactions, so it’s possible to trace a single coin from the date it was born, through every owner, to its current one. ZeroCoin wants to change that. The proposed system will establish a parallel world where coins will enter and leave, erasing the trail. It promises privacy and security in one.

The system establishes a new temporary currency called a ZeroCoin that’s kept in a big, anonymous pool that doesn’t track ownership or provenance. The true owner can spend the coin by creating a zero-knowledge proof that establishes their rightful control without revealing their identity. The coin is then removed from the anonymous pool and converted back into a regular bitcoin.

“Our goal is to build a cryptocurrency where your neighbors, friends, and enemies can’t see what you bought or for how much,” ZeroCoin’s developers say.
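The mechanics are easier to see with a toy. The sketch below illustrates only the mint-and-spend shape of the scheme using a plain hash commitment; crucially, real ZeroCoin spends a coin with a zero-knowledge proof that some commitment in the pool opens to the revealed serial number, whereas this toy reveals the blinding randomness outright, which would destroy the very anonymity the protocol exists to provide.

```python
# Toy illustration only: a hash commitment stands in for ZeroCoin's
# real construction, and the zero-knowledge spend proof is omitted.
import hashlib
import os

pool = set()  # the public pool of minted, ownerless commitments

def mint(serial: bytes) -> bytes:
    r = os.urandom(32)                       # secret blinding randomness
    pool.add(hashlib.sha256(serial + r).digest())
    return r

def spend(serial: bytes, r: bytes) -> bool:
    # Real ZeroCoin: prove in zero knowledge that SOME pool entry opens
    # to `serial`, without revealing which one or the randomness.
    return hashlib.sha256(serial + r).digest() in pool

serial = os.urandom(16)
r = mint(serial)
assert spend(serial, r)
```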

Burlap

Burlap lets you define a problem as a network of nodes with vectors of features or attributes attached to them. The algorithms can search through the network using a combination of brute-force searching and statistically guided exploration. The higher level of the algorithm plans the search and deploys the best algorithms. The toolkit includes dozens of the most useful algorithms for agent-based search.

The tool is useful for data-driven worlds where the data can be mapped into a large collection of nodes or objects. The code is written in Java and includes a large assortment of debugging and profiling tools that are useful for keeping the code moving toward the optimal goal.
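Burlap is a Java toolkit, but the flavor of the algorithms it packages is easy to show. Here is a minimal Python sketch of value iteration, one of the classic planning algorithms in this family, on a one-dimensional corridor where an agent walks left or right toward a rewarding goal state.

```python
# Value iteration on a 6-cell corridor; reaching cell 5 pays reward 1.
N, GOAL, GAMMA = 6, 5, 0.9
V = [0.0] * N  # estimated value of standing in each cell

for _ in range(100):  # sweep until the values settle
    for s in range(N):
        if s == GOAL:
            continue
        moves = [max(s - 1, 0), min(s + 1, N - 1)]  # step left or right
        V[s] = max((1.0 if s2 == GOAL else 0.0) + GAMMA * V[s2]
                   for s2 in moves)

print([round(v, 2) for v in V])  # values climb toward the goal cell
```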

SpiroSmart
Smartphones may let us talk, text, and even watch cat videos, but their greatest contribution to society may be as mobile doctors, ready to track our health day in and day out. Among the hundreds of new apps for tracking our bodies is SpiroSmart, a software program that analyzes our lungs by listening to us breathe and measuring the echoes and reverberations.

The traditional medical test, spirometry, requires people to breathe through a device with a tiny windmill that measures the intensity of the airflow. Using a microphone instead reduces the danger of contamination and makes it possible for people to test their breathing discreetly throughout the day.

The project is one part of a collection of tools analyzing lung health. Another tool, CoughSense, will record the number and severity of “cough episodes” during a day. It replaces specialized equipment or paper logs. Another approach, WiiBreathe, watches the distortion of Wi-Fi signals in the 2.4GHz range as they pass through the body and the lungs. It can track breathing within “the accuracy of 1.54 breaths per minute when compared to a clinical respiratory chest band.” All promise to reduce the need for specialized hardware, making testing simpler and more effective for all users.

Halide
As digital photography becomes more common, it’s only natural that people will want to do more to their images than merely look at them. Some want to filter the colors, others want to edit the images, and still more want to use the images as input to some algorithm, perhaps for steering an autonomous car.

All of these algorithms require loops — lots and lots of nested loops churning through the rows and columns of pixels. It turns out that being careful with the design of your algorithm by paying attention to the caching of data when structuring these loops can make a big difference in speed. If you want to convert your algorithm to run on a GPU, you’ll need to rethink all of these algorithms again.

Halide is a computer language for image processing designed to abstract away these decisions for you. It will worry about the loops and GPU conversions for you. If you write the instructions for analyzing a single pixel, it will produce fast code for churning through the entire image.
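The effect Halide exploits is easy to demonstrate in plain Python with NumPy: the arithmetic below is identical in both functions, and only the traversal order (the “schedule,” in Halide’s terms) changes, yet the cache-hostile version is markedly slower on a row-major array.

```python
import time
import numpy as np

img = np.random.rand(4000, 4000)  # row-major (C-ordered) image

def darken_rows(a):
    out = np.empty_like(a)
    for y in range(a.shape[0]):
        out[y, :] = a[y, :] * 0.5   # walks memory in layout order
    return out

def darken_cols(a):
    out = np.empty_like(a)
    for x in range(a.shape[1]):
        out[:, x] = a[:, x] * 0.5   # same math, strided cache misses
    return out

for f in (darken_rows, darken_cols):
    t0 = time.perf_counter()
    f(img)
    print(f.__name__, round(time.perf_counter() - t0, 3), "seconds")
```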

Visual Microphone
Cameras have traditionally been used to take static photos of things to save for the future. The things might be moving when the shutter snaps, but after that, they’re frozen for eternity like people on a Grecian urn. They do what your eyes do by capturing light forever.

Now that superfast cameras can capture hundreds or thousands of images per second, researchers are discovering that the cameras can do more than imitate the eyes. They can also do what our ears and skin can do by sensing sound or vibration using light alone.

The Visual Microphone project uses a series of images to detect small movements in an object. In the demonstration video, Visual Microphone watches for tiny movements that a crinkly potato chip bag creates when sound hits the bag. The vibrations may be very slight, but they’re enough for the software to recover a reasonable approximation of the sound.
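As a crude illustration of the idea, and nothing like the project’s actual phase-based analysis, averaging the brightness of a patch on the vibrating object in every frame of a high-speed video already yields a one-dimensional signal sampled at the frame rate:

```python
# Crude stand-in: mean patch brightness per frame as a vibration signal.
# The real Visual Microphone recovers far subtler, sub-pixel motion.
import numpy as np

def crude_vibration_signal(frames: np.ndarray, ys: slice, xs: slice) -> np.ndarray:
    """frames: (num_frames, height, width) grayscale video."""
    patch_means = frames[:, ys, xs].mean(axis=(1, 2))
    return patch_means - patch_means.mean()  # drop the DC offset

# 2,200 frames of (toy, random) video stand in for a high-speed capture
# of the chip bag; the result has one sample per frame.
video = np.random.rand(2200, 120, 160)
signal = crude_vibration_signal(video, slice(40, 80), slice(60, 100))
print(signal.shape)  # (2200,)
```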

The team is applying the same general idea to other problems like determining whether a building or a bridge is stable and safe. They can use a sequence of images from a windy day to look for small or not so small changes in the building. Dangerous resonant vibrations may not be large enough to be seen by a human or even felt, but the camera can flag them.

The idea is simple enough to spawn a number of other sensors. Cameras can take our pulses by tracking the flow of blood through the subtle blushing of the skin. Video rib monitors can count the breaths of an infant by watching the expansion of the chest. In these cases, the camera is not only more efficient, but safer because it doesn’t make contact and works from a distance.

Drake
Robots and drones are becoming more and more common in the enterprise as they move from the labs and take on crucial roles. Controlling these machines requires a good grasp of the laws of physics. Drake is a collection of packages that makes it a bit easier to write the code controlling these machines.

The code delivers a number of basic and not-so-basic models for predicting how your robot will move. You can begin with rigid-body models, layer in aerodynamic effects, and feed it all into a dynamic control algorithm. There’s also a complement of visualization tools to debug your code and watch how it behaves.

Institution: Massachusetts Institute of Technology
GitHub: https://github.com/RobotLocomotion/drake/wiki
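Drake’s own models live in C++ and MATLAB, but a toy Python sketch conveys what “rigid-body model plus control algorithm” means: simulate a damped pendulum with Euler integration and drive it upright with a proportional-derivative controller.

```python
import math

theta, omega = 2.5, 0.0              # angle (rad, 0 = hanging down), velocity
g, length, damping, dt = 9.81, 1.0, 0.1, 0.001
kp, kd, target = 20.0, 4.0, math.pi  # PD gains; upright setpoint

for _ in range(10_000):
    u = kp * (target - theta) - kd * omega            # PD control input
    alpha = -(g / length) * math.sin(theta) - damping * omega + u
    omega += alpha * dt                               # Euler integration
    theta += omega * dt

print(round(theta, 3), "rad; upright is", round(math.pi, 3))
```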

R

Anyone who’s spent time with big data or data scientists knows that they rely, more often than not, on a language called R to chew through the numbers and deliver the kind of statistical insights that make managers happy. Whether it’s marketing, risk management, scheduling, or any of a host of other jobs for keeping an enterprise running, R is tuned for the statistical analyses that prove or disprove a hypothesis.


Education
Now, saving the best for last, is the one thing that universities do better than anyone: teach. All of these projects are nice, but many schools are also open-sourcing and sharing their courses. They’re sharing the course materials, streaming video lectures, and even organizing the kind of study groups and grading sessions that turn a lecture or a book into a full course.

There are dozens of good courses, so it’s possible to knit together a complete degree for free (or at low cost). These two GitHub repositories are pointers to a few of the real courses out there. Drink deeply, because you won’t be limited by, say, tuition.


 

[Photo: Outside Building 99 on Microsoft’s Redmond, Washington, campus. Credit: Microsoft]
Sysadmins can now turn on the feature in System Center Endpoint Protection and Forefront Endpoint Protection

It’s time to throw adware, browser hijackers and other potentially unwanted applications (PUAs) off corporate networks, Microsoft has decided. The company has started offering PUA protection in its anti-malware products for enterprise customers.

The new feature is available in Microsoft’s System Center Endpoint Protection (SCEP) and Forefront Endpoint Protection (FEP) as an option that can be turned on by system administrators.

PUA signatures are included in the anti-malware definition updates and cloud protection, so no additional configuration is needed.

Potentially unwanted applications are those programs that, once installed, also deploy other programs without users’ knowledge, inject advertisements into Web traffic locally, hijack browser search settings, or solicit payment for various services based on false claims.

“These applications can increase the risk of your network being infected with malware, cause malware infections to be harder to identify among the noise, and can waste helpdesk, IT, and user time cleaning up the applications,” researchers from the Microsoft Malware Protection Center said in a blog post.

System administrators can deploy PUA protection for the specific anti-malware product version in their organization through a registry setting, which can be rolled out as a Group Policy setting.
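For a single machine, the change amounts to one registry value. Here is a minimal sketch using the MpEnablePua value under the Microsoft Antimalware policy key described in Microsoft’s announcement; confirm the exact path for your product version before rolling it out via Group Policy.

```python
# Sketch: enable PUA protection on one Windows machine. The key path and
# value name follow Microsoft's published guidance for SCEP/FEP, but
# verify them for your product version; run with administrative rights.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Microsoft Antimalware\MpEngine"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = block potentially unwanted applications at download/install time
    winreg.SetValueEx(key, "MpEnablePua", 0, winreg.REG_DWORD, 1)
```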

Microsoft recommends that this feature be deployed after creating a corporate policy that explains what potentially unwanted applications are and prohibits their installation. Employees should also be informed in advance that this protection will be enabled to reduce the potential number of calls to the IT helpdesk when certain applications that worked before start being blocked.

If the network is already likely to have many PUA installations, it’s recommended to deploy the protection in stages to a limited number of computers in order to see whether any detections are false positives and to add exclusions for them. Exclusion mechanisms based on file name, folder, extension and process are supported, the Microsoft researchers said.

 


Microsoft risks IT ire with Windows 10 update push

Written by admin
November 8th, 2015

Its OS-as-a-service could create headaches for shops used to a slower upgrade pace

Microsoft has made it clear that it will take on a greater role in managing the Windows update process with Windows 10. The company has also made it clear that it will aggressively push users — both consumers and businesses — to upgrade from Windows 7 and Windows 8 to its latest OS. With that in mind, it’s hard to imagine either predecessor hanging around anywhere near as long as Windows XP.

The decision to not only push updates out, but also ensure that all Windows 10 devices receive them in a timely fashion, fits well with the concept of Windows as a service. The change may even go unnoticed by many consumers. IT departments, however, are keenly aware of this shift — and many aren’t happy about it.

Managing Windows updates — old vs. new

Traditionally, Microsoft has given IT the final word on patches and updates. While most departments do roll out critical patches and major updates, they do so on their own time frame and only after significant testing in their specific environment. This ensures that an update doesn’t break an app or a PC configuration, or cause other unforeseen issues. If an update is required that could introduce problems, IT can then develop a plan to address the issue in advance of deployment. Some updates might even be judged as unneeded and never get deployed.

With Windows 10, Microsoft is adopting a service-and-update strategy based on a series of tracks known as branches. In this model, both security and feature updates are tested internally and made available to Windows Insiders. When Microsoft feels the updates are ready for primetime, they’re pushed to the Current Branch (CB). CB devices, predominantly used by consumers, receive the updates immediately through Windows Update.

Businesses and enterprises typically fall under the Current Branch for Business (CBB). Like CB devices, CBB hardware will be able to receive updates as soon as they are published, but can defer those updates for a longer period of time. The rationale for this extra time is twofold. First, the updates will have received extra scrutiny, because they have been tested internally, by Windows Insiders and by consumers via the CB, so any issues will likely be resolved, or at least identified, during that time. Second, it gives IT shops time to test the updates and develop strategies to deal with potential problems before those updates become mandatory.

Complicating the situation: There are still unknowns about how IT departments will handle the CBB update cadence and process. Microsoft has yet to complete Windows Update for Business (WUB), a set of features and tools that will be made available to organizations that have adopted the CBB update pace. There is also the possibility of using other tools, including Windows Server Update Services (WSUS), Microsoft’s System Center Configuration Manager (dubbed “Config Manager”), or a third-party patching product that can handle longer postponements.

IT pros aren’t happy

This marks a massive transition in how Windows is deployed, updated and managed in enterprise environments. Many longtime IT pros won’t be comfortable ceding this much control to Microsoft. Susan Bradley, a computer network and security consultant known in Windows circles for her expertise on Microsoft’s patching processes, has become a voice for those IT workers.

In August, Bradley kicked off a request on the matter using Microsoft’s Windows User Voice site asking for a more detailed explanation of the Windows 10 update process. Last month, she upped the ante by starting a Change.org petition demanding additional information from Microsoft as well as a change to how it will deliver updates. As of this week, the petition has more than 5,000 signatures; some signers have noted that they will refuse to move their organizations to Windows 10 unless changes are implemented.

A Change.org petition asks Microsoft CEO Satya Nadella to make his Windows 10 team provide more information to users about updates, and to give customers more control over what they install on their PCs.

The impact of the petition remains to be seen. Microsoft has already established that it views its new Windows-as-a-service model, with frequent incremental updates delivered via the branch system, as the future. Windows 10 has already passed the 132-million-PC mark, and Microsoft appears unapologetic about its plans to pressure users into upgrading to the new OS. All of these factors make it unlikely the company is going to reverse course.

This isn’t entirely new territory

The new approach to update management is striking compared to the process for previous Windows releases, but it isn’t exactly a new model. iOS, Android and Chrome OS all limit IT’s ability to manage the update process to one degree or another.

Apple has always placed the user at the center of the iOS upgrade process. When an update becomes available, users can download and install it on day one. iOS 9 introduced the ability for IT to take some control over the process, but only in the opposite direction: allowing IT to require that devices be updated, a move designed less to ensure IT management of the overall process and more to ensure that iPhones and iPads are running the latest, and therefore most secure, version of iOS.

Things are a bit murkier with Android because each manufacturer and carrier generally has to approve the updates and make them available to users, though ultimately it remains up to the user to upgrade when an update becomes available. The update challenge for Android in the enterprise is less about preventing an update and more about the uncertainty of when (or if) devices can be updated.

Chrome OS is essentially updated by Google across all of the devices running it. This is the most apt comparison to Microsoft’s plans for Windows 10. The big difference is that Chromebooks are little more than the Chrome browser and are designed primarily for working with data in cloud-based services. Although the devices do have local storage and support for some peripherals, they are extremely uniform compared to any other major platform (which makes them easier to manage than rivals).

This isn’t to say that IT professionals have always been happy about these platforms or their upgrade processes. iOS and Android were met with skepticism and even hostility by many IT departments. As the platforms have matured into true enterprise tools and it’s become clear they are a necessary part of the enterprise computing landscape, IT has had to adapt to the realities associated with supporting, securing, and managing them.

Part of that adaptation is to the way these platforms get updated. iOS is a great example of how IT departments already deal with being shut out of a platform’s update process.

With iOS, IT gets very limited lead time on major updates (typically the roughly three months between Apple’s Worldwide Developers Conference in June and the public release in the fall). Many IT shops now realize that the next version of iOS will arrive for their organizations the day it’s released. As such, it’s common practice to download and test the developer preview builds through that period to ensure smooth operation on day one. Similarly, many IT departments keep up to date on the previews of minor iOS releases throughout the year.

Microsoft’s update process is going to require a similar adjustment. If Microsoft won’t back down on its position that regular cumulative updates of Windows is the future, IT will need to take a similar approach to Windows that it uses with other platforms.

Windows is not iOS

One major difference between iOS and Windows 10 is that Microsoft still allows updates to be deferred by IT. This means that IT departments have greater lead time for testing and developing plans to address potential pitfalls. Even if IT shops rely solely on the CB release, there is expected to be up to eight months to prep before an update becomes mandatory for CBB PCs and devices. Windows Insiders will get an even longer lead time, since they will have access to updates before public release. In effect, Microsoft is striking a middle ground between Apple’s approach and the approach used in previous Windows versions.

That longer lead time, of course, isn’t a luxury. Windows deployments can be significantly more complicated than those for iOS or Android and almost universally there are more PCs than mobile devices in an organization. Still, using an iOS update strategy as a blueprint is a good starting point for figuring out how to approach Microsoft’s planned Windows 10 update process at work.

It’s also worth noting that IT departments do have some time to develop that strategy. Although Microsoft is clearly ushering anyone and everyone it can onto Windows 10, there’s little need for enterprises to make the switch from Windows 7 immediately — particularly for those that only recently made the jump from XP to 7. Delaying a transition or focusing only on a proof-of-concept or pilot project allows IT departments to get a handle on everything related to Windows 10 before rolling it out, including how to handle updates.

Ignoring Windows 10 isn’t an option

Although it’s possible to delay a Windows 10 transition, perhaps even for years, enterprises are eventually going to have to bite the bullet.

Putting off the move is perfectly logical, particularly until the core capabilities to manage Windows 10 and its update process are established. That doesn’t mean, however, that this is a time to be complacent and ignore it completely. Sooner or later, virtually every organization will need to reckon with Windows 10 (or perhaps migrate to non-Windows platforms, which would pose an entirely different set of challenges).

Preparing for that reality, even while pushing back against Microsoft’s current plans, is critical to eventually making a smooth transition.

 


 

The evil that lurks inside mobile apps

Written by admin
October 31st, 2015


The enterprise is at risk from malware and vulnerabilities hiding within mobile apps. You have to test your mobile apps to preserve your security.

Mobile apps are ubiquitous now, and they offer a range of business benefits, but they also represent one of the most serious security risks ever to face the enterprise. The mixing of devices and software for work and leisure opens up many potential avenues for attack, but even purpose-built enterprise apps are shipping with woefully inadequate security protections.

Defects and vulnerabilities commonplace
Did you know that mobile apps typically ship with between one and ten bugs in them?

According to research by Evans Data, only 5% of developers claim to ship apps with zero defects, while 20% ship with between 11 and 50 bugs. Even when testing is conducted, it’s on a limited subset of devices and platform versions.

Many software developers simply don’t have the resources to conduct proper testing before release, especially with the pressure to reach the market faster than everyone else. It’s accepted that many defects will be discovered by customers and fixed later through updates; in fact, 80% of developers push out updates at least monthly.

The chance of security vulnerabilities slipping through is very high. But that’s for an average mobile app developer, surely the enterprise takes security more seriously, right?

You may assume that mobile app security testing is a lot more stringent in the business world, but it’s a dangerous assumption to make. Enterprise app developers are subject to the same pressures, and they’re just as likely to forgo security in the rush to market.


Lack of security testing in the enterprise
Many organizations are still taking it on trust that the mobile apps they use are secure. We’ve looked at the importance of assessing third-party vendors before. Almost 40% of large companies, even in the Fortune 500, don’t take the necessary precautions to secure the apps they build for customers, according to research by IBM and the Ponemon Institute.

In fact, one-third of companies never test their apps at all, and 50% of the companies surveyed admitted they devote absolutely no budget to mobile security.

Consider that more than half of businesses plan to deploy 10 or more enterprise mobile apps in the next two years alone, according to 451 Research. The potential risk here is enormous. More data breaches are inevitable, and many will go unnoticed for long periods of time. The impact on some businesses will be devastating, as security threats too often go ignored. To bury your head in the sand is to expose your business to potential catastrophe.

Build in security and educate
If you’re only thinking about security at the end of app development, then you’ve already left it too late. You need to build in secure features and adopt stringent testing from day one. That means consulting or hiring security experts during the design phase, and empowering them to influence developers. Focus on data encryption, user authentication, and regulatory requirements.
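
To make that concrete, here is a minimal sketch of encrypting locally stored data on Android using the Jetpack Security library (Kotlin; the preference file name is a placeholder, and real apps may need different key management):

import android.content.Context
import android.content.SharedPreferences
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Sketch: keep app data encrypted at rest rather than in plain SharedPreferences.
fun createSecurePrefs(context: Context): SharedPreferences {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM) // hardware-backed where the device supports it
        .build()
    return EncryptedSharedPreferences.create(
        context,
        "secure_prefs", // placeholder file name
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
}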

Monitoring and reporting should be built into your mobile apps. That way there’s an audit trail to help maintain security. Reports can also produce all sorts of useful analytics that help guide future development in the right direction. It’s not just about security; it’s also an important part of ensuring ROI for mobile apps.
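
As a rough sketch of what that could look like, with hypothetical event fields and upload hook rather than any specific product’s API:

// Hypothetical audit-trail helper: sensitive actions are timestamped
// and handed off to whatever reporting backend the organization uses.
data class AuditEvent(val user: String, val action: String, val timestampMs: Long)

class AuditLog(private val upload: (AuditEvent) -> Unit) {
    fun record(user: String, action: String) {
        upload(AuditEvent(user, action, System.currentTimeMillis()))
    }
}

// Usage: auditLog.record("jsmith", "exported customer report")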

It’s worth noting that mobile security at a platform level is improving, but few developers are taking full advantage of the new features designed specifically to secure apps for the enterprise. There has to be some education here. Without input from InfoSec talent, and the right training for developers, there’s no doubt that insecure mobile apps will continue to flood the market.

There’s no substitute for testing
At the end of the day, you will never know if your mobile apps are truly secure unless you test them. Proper mobile security penetration testing is essential. External testers with no vested interest and the right blend of expertise are best placed to provide the insight you need to uncover dangerous vulnerabilities and help you mitigate them.

If development continues after release, as your mobile apps are updated with new features and defect fixes, make sure that you consider the security implications and test each new release properly – it’s the only way you can really be sure that your mobile apps are secure.


7 ways to ease stress at work

Written by admin
October 17th, 2015

Workplace stress is a fact of life, especially in the IT industry. Keeping that stress to manageable levels can seem like a full-time job in and of itself. Thankfully, there are some easy ways to relax, recharge and rejuvenate, and many of them you can do right at your desk.

Technology
Ironically, the same technology that’s causing you undue stress and frustration can also be used to help manage and reduce it. From biometrics and fitness trackers that monitor your heart rate, blood pressure and the number of steps you take each day, to resilience solutions that guide you to stress-reduction resources (like those from Concern and Limeade), technology is playing a huge role in helping workers chill out and relax.

Meditation
Meditation can be done anywhere, anytime, whether you’ve got five minutes or 50. Take some deep breaths and clear your mind in between meetings, or before a particularly stressful phone call. Meditate on the bus or the subway. You can try mantra meditation, where you silently repeat a word or phrase; mindfulness meditation, which focuses on the flow of your breath and on being conscious of the present moment; or some form of meditative movement, like Qigong or yoga.

Exercise breaks
Does your workplace have an on-site fitness center? Use it. Is one of your employee benefits or perks a fitness center membership or reimbursement? Take advantage of it. Even a brisk walk around the block, or jogging up and down the stairs instead of taking the elevator, can get the blood flowing, relax your mind and energize your body.

Tech time-out
It’s hard to manage stress when you’re constantly reading emails, your smartphone’s ringing off the hook, text messages keep flooding in and your to-do list keeps getting longer. Set aside a certain period of time each day for a tech time-out, says Henry Albrecht, CEO of employee wellness solutions company Limeade. Turn off all your electronic devices and focus on something other than a screen. You could even meditate during this time. You’ll be surprised how peaceful it can be.

Curb caffeine
No one’s suggesting you give up your morning cup of Joe, but cutting down on caffeine intake, or setting a time of day when you stop drinking caffeinated beverages, can help you better manage stress. “Maybe after, say, 2 p.m., avoid anything with caffeine in it. That can affect your sleep later on in the evening, and if you aren’t well-rested, that will add to your stress,” says Albrecht.

Sound sleep
Make sure you’re getting your rest, or you’ll be poorly equipped to manage stress. The general rule is eight hours, but some people function optimally on a little more or less. Figure out what works for you and stick to it. And don’t fall asleep in front of the TV, your tablet or your smartphone, either. Research shows that can affect the quality of your REM sleep and impact your rest.

Fix your finances
Financial issues can affect more than just your credit score – taking care of your financial health is critical to maintaining your overall physical and emotional health, too. If you’re struggling financially, check with your HR department to see if they have financial wellness and planning resources available. Or consult a financial advisor or debt consolidation organization. You should also check out free budgeting technology, like Mint, that can help track your spending.


How to use stipends to ensure BYOD success

Written by admin
October 3rd, 2015

There are real differences between stipend options, and the success of your program will depend on getting them right

Stipends are a way for businesses to reimburse employees for a portion of their wireless costs and, if implemented properly, address these common issues: cost, eligibility, control and taxes. Here’s how:

* Costs. When businesses talk about costs, they generally are referring to either time or money. And companies opting to use expense reports for stipends will find the task occupies a good bit of both. It’s time-consuming for accounting departments to sort through individual expense reports and issue payments only after an employee’s usage has been verified. It’s no surprise, then, that an Aberdeen Group study suggests each expense report costs $18 to process. Compounding those costs, companies opting for this method will issue hundreds or even thousands of payments each month, so the benefits stipends provide can quickly be outweighed.

More recently, a few carriers have started to offer a split-billing solution. Split billing attempts to categorize employee usage as either personal or work-related and, in turn, solves some of the issues that expense reports present. For starters, companies could avoid processing individual expense reports, since the categorized carrier bills make them unnecessary. Unfortunately, these split-billing solutions are only partial solutions, as they typically do not account for the voice portion of an employee’s bill. An even larger concern, however, is that split billing forces employees to align with one carrier, a concept that is at odds with the heart of BYOD: autonomy.

A less discussed but potentially more complete stipend solution is referred to as direct-to-carrier credits. In fact, Gartner has called this process the most effective method for managing BYOD expenses. Simply put, companies determine payment levels based on employee role or any other relevant factor, and then have the stipends applied directly to employees’ bills as a credit.

This solution is typically tied into software that encourages employees to comply with mobile policies and alerts the employer and BYOD solution provider when a device is out of compliance. Plus, by integrating with HR Information Systems, the solution alerts the vendor when an employee’s role or status has changed within the organization.

* Determining Eligibility. Regardless of the stipend approach used, companies must determine which employees are eligible to participate, and many base the decision on roles. For example, an organization may decide to exclude hourly employees from its stipend program. That doesn’t necessarily mean those employees can’t access the network; it simply means they bear the entire cost themselves. If utilizing direct-to-carrier credits, companies may place eligible employees into one of three or more categories. An employee who rarely needs to be contacted outside the office might receive a $35 stipend each month. A salesperson, on the other hand, might receive twice that amount due to the demands of the position. In any event, employees would be assigned a tier by managers and then enroll in the BYOD program over a web portal.
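
As a toy illustration of such a tier table (the tier names and the middle amount are invented; only the $35 figure and the double-that sales tier come from the example above):

// Illustrative only: tiers a manager might assign through the web portal.
enum class StipendTier(val monthlyCreditUsd: Int) {
    LIGHT(35),       // rarely contacted outside the office
    STANDARD(50),    // hypothetical middle tier
    FIELD_SALES(70)  // roughly twice the base, per the salesperson example
}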

* Taking Control. The decision to reimburse employees for BYOD, at least in California, became clearer with the Cochran ruling. In other states, it may simply come down to control; that is, control over the devices accessing corporate information. For example, if MDM software must be installed before a device can access the network, businesses can ensure their employees don’t download certain apps or visit certain sites that may jeopardize security.

Stipends offer a compelling incentive for end users. Employees get help paying their mobile bill (for work-related usage, of course), and employers get some measure of control over the device itself, because stipends can be tied into the MDM software so that a device falling out of compliance has its stipend immediately suspended. Those safeguards are absent from reimbursements made via expense reports. And though stipends may be contingent upon compliance, if they aren’t synced with the MDM software, they do little to prevent a breach or enable a quick response to a noncompliant device.
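
In code terms, that compliance rule boils down to something like this sketch (a simplification; real MDM integrations are event-driven rather than a single function call):

// Sketch: a stipend credit is issued only while MDM reports the device compliant.
fun creditForMonth(monthlyCreditUsd: Int, mdmReportsCompliant: Boolean): Int =
    if (mdmReportsCompliant) monthlyCreditUsd else 0 // suspended until back in compliance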

* Limiting Taxes. The Internal Revenue Service (IRS), in Notice 2011-72, thankfully removed mobile devices from the “heightened substantiation requirements” they were subject to prior to 2010. To avoid tax consequences, the devices have to be provided for substantial noncompensatory business reasons, such as an employee’s need to communicate with clients after normal work hours or the employer’s need to reach the employee during similar off hours.

Shortly thereafter, the IRS issued Interim Guidance on Reimbursement of Employee Personal Cell Phone Usage in light of Notice 2011-72, wherein it addressed reimbursements made to employees for the business use of employee-owned devices. For a stipend to avoid being taxed as additional wages or income, the memorandum states that, where employers require employees to use their personal cell phones for the same substantial noncompensatory business reasons noted in Notice 2011-72, the employee must “maintain the type of cell phone coverage that is reasonably related to the needs of the employer’s business, and the reimbursement must be reasonably calculated so as not to exceed expenses the employee actually incurred in maintaining the cell phone.”

A tiered approach to stipends that considers the differing needs and demands of various roles within an organization would seem to satisfy those requirements. Though not without shortcomings, split-billing solutions clearly satisfy the requirements by separating usage on each bill.
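
None of this is tax advice, but the “reasonably calculated” language reduces, on the simplest reading, to a cap, sketched here on the assumption that the employer can see or estimate the employee’s actual bill:

// Sketch of the constraint quoted above: the credit never exceeds
// what the employee actually incurred in maintaining the phone.
fun taxSafeCreditUsd(tierAmountUsd: Double, actualBillUsd: Double): Double =
    minOf(tierAmountUsd, actualBillUsd)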

While much about the tax code remains unclear, the fact that BYOD is growing in popularity every year is undisputed. And as more Millennials enter the workforce, that trend is unlikely to slow.

BYOD is about more than the wishes of tech-savvy employees; it’s about productivity and the bottom line. To maximize both, companies should strongly consider offering employees a stipend for the work-related use of their personal devices.

While options for paying stipends exist, organizations need to understand there are real differences between those options and, often, the success of a BYOD program depends on how those stipends are offered.

 

