Archive for the ‘ Tech ’ Category


Tech companies snag 20 spots on Glassdoor’s ranking of 25 highest paying companies in America

Tech companies dominate Glassdoor’s ranking of the highest paying companies in the U.S., snagging 20 of the 25 spots. The highest-ranking tech company is Juniper Networks, which pays its workers a median total compensation of $157,000.

The next-highest ranking tech company is Google, which landed at No. 5 on Glassdoor’s list with a median total compensation of $153,750.

While tech companies earned the most spots on the list, consulting firms set the high bar for compensation in Glassdoor’s report, “25 Highest Paying Companies in America for 2016.” No. 1 on the list is A.T. Kearney, which pays a median total compensation of $167,534. Strategy&, at No. 2 on the list, pays a median total compensation of $160,000.

Juniper placed third among the 25 companies, while McKinsey & Company ranked fourth with a median total compensation of $155,000.

Glassdoor’s total compensation figures include base salary as well as other forms of pay, such as commissions, tips and bonuses. The data comes from U.S.-based employees who voluntarily shared their compensation on Glassdoor’s website during the past year. Companies considered for Glassdoor’s report must have received at least 50 salary reports from U.S.-based employees during the 12-month time frame.
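
The ranking methodology is simple enough to sketch in a few lines of code. The following Python snippet is purely illustrative (the data shape and company names are invented); it computes a median-total-compensation ranking while enforcing the 50-report threshold described above:

```python
from statistics import median

# Invented sample of self-reported entries: (company, base salary, total comp).
reports = [
    ("ExampleCo", 135_000, 157_000),
    ("ExampleCo", 120_000, 140_000),
    ("OtherCo", 110_000, 125_000),
]

MIN_REPORTS = 50  # Glassdoor's inclusion threshold

by_company = {}
for company, _base, total in reports:
    by_company.setdefault(company, []).append(total)

# Rank companies by median total compensation, keeping only those that
# cleared the report threshold (this toy sample is too small to qualify).
ranking = sorted(
    ((name, median(totals)) for name, totals in by_company.items()
     if len(totals) >= MIN_REPORTS),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranking[:25])  # top 25 by median total compensation
```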

“Salaries are sky-high at consulting companies due to ‘barriers of entry’ in this field, which refers to employers wanting top consultants to have personal contacts, reputations and specialized skills and knowledge,” said Andrew Chamberlain, Glassdoor chief economist, in a statement. “In technology, we continue to see unprecedented salaries as the war for talent is still very active, largely due to the ongoing shortage of highly skilled workers needed.”

Here is Glassdoor’s full list of the 25 highest paying companies in the U.S.:

1. A.T. Kearney: median total compensation $167,534; median base salary $143,620
2. Strategy&: median total compensation $160,000; median base salary $147,000
3. Juniper Networks: median total compensation $157,000; median base salary $135,000
4. McKinsey & Company: median total compensation $155,000; median base salary $135,000
5. Google: median total compensation $153,750; median base salary $123,331
6. VMware: median total compensation $152,133; median base salary $130,000
7. Amazon Lab126: median total compensation $150,100; median base salary $138,700
8. Boston Consulting Group: median total compensation $150,020; median base salary $147,000
9. Guidewire: median total compensation $150,020; median base salary $135,000
10. Cadence Design Systems: median total compensation $150,010; median base salary $140,000
11. Visa: median total compensation $150,000; median base salary $130,000
12. Facebook: median total compensation $150,000; median base salary $127,406
13. Twitter: median total compensation $150,000; median base salary $133,000
14. Box: median total compensation $150,000; median base salary $130,000
15. Walmart eCommerce: median total compensation $149,000; median base salary $126,000
16. SAP: median total compensation $148,431; median base salary $120,000
17. Synopsys: median total compensation $148,000; median base salary $130,000
18. Altera: median total compensation $147,000; median base salary $134,000
19. LinkedIn: median total compensation $145,000; median base salary $120,000
20. Cloudera: median total compensation $145,000; median base salary $129,500
21. Salesforce: median total compensation $143,750; median base salary $120,000
22. Microsoft: median total compensation $141,000; median base salary $125,000
23. F5 Networks: median total compensation $140,200; median base salary $120,500
24. Adobe: median total compensation $140,000; median base salary $125,000
25. Broadcom: median total compensation $140,000; median base salary $130,000


Data science is one of the fastest growing careers today and there aren’t enough employees to meet the demand. As a result, boot camps are cropping up to help get workers up to speed quickly on the latest data skills.

Data Scientist is the best job in America, according to data from Glassdoor, which found that the role has a significant number of job openings and that data scientists earn an average salary of more than $116,000. According to its data, the job of data scientist rated a 4.1 out of 5 for career opportunity and a 4.7 for job satisfaction. But as demand for data scientists grows, traditional schools aren’t churning out qualified candidates fast enough to fill the open positions, and there’s no clear path for those who have been in the tech industry for years and want to take advantage of the lucrative job opportunities.

Enter the boot camp, a trend that has quickly grown in popularity as a way to train workers in in-demand tech skills. Here are 10 data science boot camps designed to help you brush up on your data skills, with courses for everyone from beginners to experienced data scientists.

Bit Bootcamp

Located in New Jersey, Bit Bootcamp offers both part-time and full-time courses in data analytics that last four weeks. It has a rolling start date, and courses cost between $1,500 and $6,500, according to data from Course Report. It’s a great option for students who already have a background in SQL as well as object-oriented programming experience in a language such as Java, C# or C++. Attendees can expect to work on real problems they might face in the workplace, whether at a startup or a large corporation. The course concludes with a Hadoop certification exam that draws on the skills learned over the four weeks.
Price: $1,500 – $6,500

NYC Data Science Academy
The NYC Data Science Academy offers 12-week courses in data science that combine “intensive lectures and real world project work,” according to Course Report. It’s aimed at more experienced data scientists who have a master’s or Ph.D. degree. Courses include training in R, Python, Hadoop, GitHub and SQL with a focus on real-world application. Participants walk away with a portfolio of five projects to show potential employers, as well as a capstone project that spans the last two weeks of the course. The NYC Data Science Academy also helps students garner interest from recruiters and hiring managers through partnerships with businesses. In the last week of the course, students participate in mock interviews and job search prep; many also have the opportunity to interview with hiring tech companies in the New York and Tri-State area.
Price: $16,000

The Data Incubator
The Data Incubator is another program aimed at more experienced tech workers who have a master’s or Ph.D., but it’s unique in that it offers fellowships, meaning students who qualify can attend for free. Fellowships, which must be completed in person, are available in New York City, Washington, D.C., and the Bay Area. The program also offers students mentorship directly from hiring companies, including LinkedIn, Microsoft and The New York Times, all while they work on building a portfolio to showcase their skills. The boot camp runs for eight weeks, and students need a background in engineering and science. Attendees can expect to leave the program with data skills that are applicable at real-world companies.
Price: Free for those accepted

Galvanize
Galvanize has campuses in Seattle; San Francisco; Denver, Fort Collins and Boulder, Colo.; Austin, Texas; and London. The focus of Galvanize is to develop entrepreneurs through a diverse community of students that includes programmers, data scientists and Web developers. Galvanize boasts a 94 percent placement rate for its data science program since 2014, and students can apply for partial scholarships of up to $10,500. According to Galvanize, students have gone on to work for companies such as Twitter, Facebook, Airbnb, Tesla and Accenture. This boot camp is intended to combine real-life skills with education, so that graduates walk away ready to start a new career or advance at their current company through formal courses, workshops and events.
Price: $16,000

The Data Science Dojo
With campuses in Seattle, Silicon Valley, Barcelona, Toronto, Washington and Paris, the Data Science Dojo brings quick and affordable data science education to professionals around the world. It’s one of the shortest programs on this list — lasting only five days — and it covers data science and data engineering. Before you attend, you get access to online courses and tutorials to learn the basics of data science. Then you start the in-person program, which consists of 10-hour days over the course of five days. Finally, after the boot camp is complete, you’re invited to exclusive events, tutorials and networking groups that help you continue your education. Due to the short nature of the course, it’s tailored to those already in the industry who want to learn more about data science or brush up on the latest skills. However, unlike some of the other courses on this list, you don’t need a master’s degree or Ph.D. to enroll; it’s aimed at anyone at any skill level who wants to throw themselves into the trenches of data science and become part of a global network of companies and students who have attended the same program.
Price: Free for those accepted

Metis
Metis has campuses in New York and San Francisco, where students can attend intensive in-person data science workshops. Programs take 12 weeks to complete and include on-site instruction, career coaching and job placement support to help students make the best of their newly acquired skills. Similar to other boot camps, Metis’ programs are project-based and focus on real-world skills that graduates can take with them to a career in data science. Those who complete the program can expect to walk away with in-depth knowledge of modern big data tools, access to an extensive network of professionals in the industry and ongoing career support.
Price: $14,000

Data Science for Social Good
This Chicago-based boot camp has a specific goal: churning out data scientists who want to work in fields such as education, health and energy to help make a difference in the world. Data Science for Social Good is a fellowship program offered through the University of Chicago that allows students to work closely with both professors and professionals in the industry. Attendees are put into small teams alongside full-time mentors who help them, over the course of the fellowship, develop projects and solve problems facing specific industries. The program lasts 14 weeks, and students complete 12 projects in partnership with nonprofits and government agencies to help tackle problems currently facing those sectors.
Price: Free for those accepted

Level
Offered through Northeastern University, Level is a two-month program that aims to turn you into a hirable data analyst. Each day of the course focuses on a real-world problem that a business might face, and students develop projects to solve these issues. Students can expect to learn more about SQL, R, Excel, Tableau and PowerPoint, and to walk away with experience in preparing data, regression analysis, business intelligence, visualization and storytelling. You can choose between a full-time eight-week course that meets five days a week, eight hours a day, and a hybrid 20-week program that meets online and in person one night a week.
Price: $7,995

Microsoft Research Data Science Summer School
The Microsoft Research Data Science Summer School — or DS3 — runs for eight weeks during the summer. It’s an intensive program intended for upper-level undergraduates and graduating seniors, with a goal of growing diversity in the data science industry. Attendees get a $5,000 stipend as well as a laptop they keep at the end of the program. Classes accommodate only eight people, so the selection process is competitive, and the program is open only to students who already live in, or can arrange their own accommodations in, the New York City area.
Price: Free for those accepted

Silicon Valley Data Academy
The Silicon Valley Data Academy, or SVDA, hosts eight-week training programs in enterprise-level data science skills. Those who already have an extensive background in data science or engineering can apply to be a fellow and have the tuition waived. You can expect to learn more about data visualization, data mining, statistics, machine learning and natural language processing, as well as tools such as Hadoop, Spark, Hive, Kafka and NoSQL. The program follows a more traditional curriculum, including homework, but it also features guest lectures, field trips to the headquarters of collaborating companies and projects that offer real-world experience.
Price: Free for those accepted

Q&A: Mobile app security should not be an afterthought

Written by admin
February 13th, 2016

As enterprises struggle to keep up with their internal demand for mobile apps, more are turning to rapid development workflows. What does this mean for security?

As enterprises struggle to keep up with their internal demand for mobile apps, more are turning to speedier development workflows such as the Minimum Viable Product (MVP), which essentially calls for mobile development teams to weigh return on effort against risk when choosing which apps to develop and which features to build within them. That is: focus on apps and capabilities that users are actually going to use, and skip those they won’t.

Sounds simple, but what does that mean when it comes to security? We know application security is one of the most important aspects of data security, but if software teams are moving more quickly than ever to push apps out, security and quality assurance need to be part of the process.

The flip side is that minimal apps and features could mean less attack surface. To get some answers on the state of mobile app security and securing the MVP, we reached out to Isaac Potoczny-Jones, computer security research lead at Galois, a computer security research and development firm.

Potoczny-Jones has been a project lead with Galois since 2004 and is an active open source developer in cryptography and programming languages. He has led many successful security and identity management projects for government organizations, including the Navy and DOD, DHS, federated identity for the Open Science Grid (DOE), mobile password-free authentication (DARPA), and authentication for anti-forgery in hardware devices (DARPA).

Please tell us a little about Galois and your role there in security.

Galois is a computer security research and development firm out here in Portland, Ore. We do a lot of work with the U.S. federal government; we’ve been around since 1999, and I’ve been here for 11 years now. I think a lot about this topic. I really appreciate, and employ myself, lean methodologies for product development, and I love the lean startup approach. I also do security analysis for companies, so I’ve gone into a number of start-ups and looked at their security profile for their products or their infrastructure, and helped them develop a security program. I’ve definitely seen both sides of the issue as far as where MVP thinking leads you.

What are you seeing within organizations today when it comes to mobile security?

There’s definitely a lot more development happening in mobile. The best practices in mobile aren’t as well developed as best practices for the Web, though that’s getting a little bit better. Consider HTTPS: something that’s relatively straightforward on the Web, yet people were doing it wrong on mobile for years before anyone really noticed. There’s a lot you can get wrong with HTTPS, and they were getting it all wrong. As people move over to mobile, they are definitely having to relearn some of the lessons we learned over the years.
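
To make the HTTPS point concrete: one of the most common ways client code “gets it all wrong” is disabling certificate verification to silence errors. A minimal Python sketch of the wrong and the right way (the URL is a placeholder):

```python
import ssl
import urllib.request

# WRONG: turning off verification makes certificate errors disappear,
# but lets any man-in-the-middle impersonate the server.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# RIGHT: the default context checks the certificate chain AND that the
# certificate actually matches the hostname being contacted.
secure = ssl.create_default_context()

with urllib.request.urlopen("https://example.com/", context=secure) as resp:
    print(resp.status)
```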

Password security is another one of those. People began to make passwords on websites a lot more robust. You can’t just have a four or five letter password anymore on most websites. But because mobile devices are so difficult to type passwords into, a lot of sites have relaxed those password rules. In reality, the threat is just the same as it always has been.

What impact do you see from the minimum viable product, or minimum viable app, trend?

On the MVP front, there’s a very fascinating challenge with security, because security is a non-functional requirement. I tend to like the lean scrum methodology. I don’t know if you’re familiar with that one, but I can use it as an example; they’re all kind of similar in some ways. They emphasize features, they emphasize things the users can see. They emphasize testing out ideas and getting them into the market, gathering metrics about how effective they are, and using that as feedback into the product. That’s a really good way to develop a product. But even the terminology, minimum viable product, really emphasizes minimizing.

It emphasizes getting rid of what you don’t need. Those things together, minimizing things and really having an emphasis on what the user can do and see, that makes it so that non-functional requirements are kind of an afterthought. You have to squint to figure out how to apply non-functional requirements like security to a lot of these processes like scrum.

I would imagine with an MVP teams want to move the app out as quickly as possible, so they don’t want to spend a lot of time threat modeling and going through a lot of additional process, because that’s all adding to more development time. So there seems to be a natural friction between the goals of MVP and good security.

It’s absolutely a friction. It’s challenging because security is mostly invisible. That means good security and bad security look exactly the same, until something goes wrong. Security is really visible when something is broken or somebody gets hacked and you make the news. Then it kind of blows up in your face. We’ve seen this a few times. I don’t know how many start-ups it’s killed, it’s probably killed a few, but it’s definitely cost a lot of start-ups when their first major news coverage is that they were hacked.

What are some ways organizations can ease that tension when it exists? Is there a way to bring security in so it’s not too obtrusive? Is there a way to separate out apps by data type? And possibly greenlight MVP apps that don’t touch more sensitive data, and give a closer look at those apps that do?

I think that’s a good approach. As you point out, one way is to say, let’s see if we can do an MVP with data that’s not as sensitive, so you won’t have to focus as strongly on security. Nowadays, that’s a little more challenging. Even for the minimum things you do, you will need security. It kind of doesn’t matter what your data is, you will get targeted, you will get attacked, even if it’s just by the automated bots that run around the Internet attacking everything. They’ll use your infrastructure for sending spam at the very least, if that’s all they can do. To me, the approach is you have to implement some of the industry best practices, such as the OWASP Top 10. You have to believe that security is an important part of a minimum viable product to even begin to get these user stories in there.

What I like to tell people is to think about user stories, even negative user stories, like: as a user, I don’t want to see my personal information leaked on the Internet because I’ve shared or stored something sensitive in your app or your website. I don’t want to see it in the hands of people who will use my private information against me.
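
One way to make a negative user story like that actionable is to encode it as an automated acceptance test. Here is a minimal, purely illustrative sketch in Python; the staging host and endpoint paths are hypothetical:

```python
import requests

BASE = "https://staging.example.com"  # hypothetical staging host

def test_profile_requires_auth():
    """Negative user story: an unauthenticated user must NOT be able
    to read another user's personal information."""
    resp = requests.get(f"{BASE}/api/users/123/profile")  # no credentials
    assert resp.status_code in (401, 403)

def test_errors_do_not_leak_internals():
    """Error responses should not echo stack traces or raw SQL."""
    resp = requests.get(f"{BASE}/api/users/%27--")  # malformed input
    assert "Traceback" not in resp.text
```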

That sounds like something for which a security team could put together a guide, or a checkpoint that determines whether an app can go through. For instance, if the app meets certain conditions, or any one of them, it has to go through a security review. If not, a security-light approach within certain guidelines is OK.

That’d be perfect. Typically these lean approaches have at least some kind of testing methodology built in, or acceptance testing. Or, as some of them say, “What’s your definition of ‘done’?” The first step is just saying, “We’re going to include security in these definitions of done.” Once you’ve at least gotten to that level, which I don’t think a lot of people have, then they’re going to at least do the right things. You’re going to start to build it either into the user stories or into the acceptance testing.

But you can’t leave it to the end of the process. If you leave security acceptance testing until the end, your schedule is naturally going to slip. Then you’ll get to the security testing and find there’s a lot more work to do, and you’ll be left with the unfortunate decision of either fixing things and letting your schedule slip, or letting something go out the door that’s not secure.

The real tragedy is when a system is inherently insecure, built in a way that requires major rework, because you didn’t think about security at the beginning. A lot of things are easy to add at the end with security, but sometimes you run into systems that are just kind of broken from the foundation. As with any of these things, the later you catch it, the costlier it’s going to be.

If you’re looking at your to-do list, whatever that to-do list is, whether it’s a list of stories or a big list of tasks and action items, you should be recognizing some security issues in there as you go. You’ll get to a point, you’re developing something, and one of your developers hopefully will say, “Well, look, our system is vulnerable to cross-site request forgery or cross-site scripting,” which any system that’s not designed to protect against them is going to be.

If you look at your bug list, you should see that pop up there at some point. Some of these security issues will come up during development, because nothing will be perfect. That’ll be an early indicator.

If you look at your bug list and you don’t see anything, and your developers aren’t actively talking about security, saying things like, “We’re going to have to add some tasks for security” or “I want to add that feature for you, but that’s going to have an impact on security,” if you’re not hearing it as part of the conversation, then there’s going to be a problem.


IT spending tanked worldwide last year

Written by admin
January 19th, 2016

But the U.S. bucked the trend, as spending rose

Worldwide IT spending fell nearly 6% last year — the largest one-year decrease research firm Gartner says it has ever seen. The global forecast for 2016 is for an improving, but relatively flat, $3.54 trillion. That would be a 0.6% increase.

Gartner blames a strong U.S. dollar for the global decline, because it effectively increased the price of exports by as much as 20%. Political and economic instability in countries such as Russia and Brazil also contributed to the spending problems. By comparison, the U.S. saw an increase in IT spending.

In the U.S., IT spending increased 3.1% to $1.14 trillion. The U.S. forecast this year is for a 1.2% increase.

Globally, “we’re just in this anemic growth period,” said John-David Lovelock, a research vice president at Gartner. The countries that saw the most problems with IT spending include Russia, Japan and Brazil.

The economic issues also changed how firms bought IT products and services, said Lovelock.

Instead of buying a product license for $1 million, for instance, users are switching to SaaS products for $100,000 a year. Cloud services have also replaced physical servers, he said.

Globally, there were declines in every area of IT spending, including software, devices and services. The only area to post growth was data center systems spending, largely thanks to cloud.

The IT area expected to see the largest gains this year is software; it is expected to rise 5.3% to $326 billion globally. CRM is the hot area, as users seek to integrate social media with their business needs.

 


New job realities ahead for IT workers

Written by admin
January 16th, 2016

Next time, an economic downturn may be different for tech

The change in IT hiring was illustrated this week by General Electric Co., which announced it is moving its headquarters from Fairfield County, Conn., to Boston. Announcing the move, GE CEO Jeff Immelt said Greater Boston is home to 55 colleges and universities and “attracts a diverse, technologically fluent workforce.”

Four months prior, GE had announced the formation of a new business, GE Digital, a $6 billion unit with a goal of becoming “a top 10 software company by 2020,” Immelt said at the announcement. To help staff this initiative, GE is hiring technology workers capable of new product development.

This isn’t happening just at GE. IT employment is broadly shifting away from infrastructure support, which is increasingly vulnerable to offshore outsourcing and migration to cloud services.

“GE is basically reinventing itself and trying to become the leading industrial software company in the world,” said Erik Dorr, vice president of research at management consulting firm Hackett Group.

For GE this means building platforms to support new technologies, such as Internet of Things-enabled products. “They recognize that all of this is predicated on having access to top talent,” said Dorr.

IT employment has, in the past, followed the economy. The Great Recession resulted in massive IT job layoffs as companies cut back-office operations. But today’s shift to “digitization” of products — turning consumer wares into connected products, adapting to mobile, and utilizing business intelligence, robotics and social media — has increased demand for people with these skills.

This means that if the global stock sell-off and crashing oil prices result in new waves of layoffs, tech workers who develop new products, markets and digital experiences may be in the best position to survive.

Firms “are going to hire these people no matter what happens to the economy,” said David Foote, the CEO of Foote Associates, which researches the IT labor market. “If there is a downturn, they work even harder to keep the people they’ve got,” he said.

Technology jobs are now embedded throughout organizations, and many CIOs may not have the control over technology spending they once did. But they still are responsible for a sizeable part of IT spending.

Estimates of the number of new IT jobs added last year range from 125,000 to about 180,000, similar to what happened in 2014. This is based on an analysis of government labor data by labor market analysts.

In 2016, IT budgets “are still growing, but only at 2% at the median,” said Frank Scavo, the president of Computer Economics, a research firm. That’s down from 3% IT budget growth in 2015.

“We do not see layoffs on the horizon,” said Scavo, whose firm runs ongoing surveys of IT managers. “It’s not a hiring boom by any means, but tech staffing is still healthy,” he said. Only 7% of IT executives expect to see staff cuts in 2016, while 40% plan to hire more staff members, said Scavo.

But Victor Janulaitis, the CEO of Janco Associates, said IT hiring, which slowed in the last few months of last year, will be impacted by the financial market turmoil. “I think we’re seeing the first phase of a new downturn in the economy,” he said. He expects IT hiring to be flat this year.

For his part, Mark Roberts, the CEO of TechServe Alliance, which also tracks IT hiring, doesn’t see the recent softening in IT hiring as a sign of impending economic decline.

“IT employment has been growing at a very steady clip and still outperforms the overall workforce,” said Roberts. “At some point, the significantly elevated rate of growth is not sustainable,” he said.

There’s another factor that may have had a role in GE’s move to Boston: GE has been angry over Connecticut’s rising tax rates, creating a political storm.

The tax climate is more favorable in Massachusetts than Connecticut, says the Tax Foundation, an independent tax policy research organization. Massachusetts is ranked 25th nationally, versus Connecticut, near the bottom of tax favorability at 44. But the tax climate is even worse in California, which is ranked at 48, and that’s the state with the nation’s highest concentration of technology jobs.


 

Inside AT&T’s grand dynamic network plan

Written by admin
January 7th, 2016

The service provider shares lessons learned from early adopters of its first Network on Demand service and outlines what comes next

AT&T is pouring billions into its network to make it more dynamic, which is resulting in new capabilities for enterprise customers. Network World Editor in Chief John Dix recently stopped by AT&T headquarters in Dallas to talk to Josh Goodell, VP of Network on Demand, about what the company is learning from early adopters of its Switched Ethernet on Demand service and what comes next. Among other things, Goodell explains how provisioning now takes days vs. weeks, service profiles can be changed in seconds, and how he expects large shops to use APIs to connect their network management systems directly to AT&T controls. Oh, and a slew of virtual functions are on the horizon that will enable you to ditch all those appliances you’ve been accumulating.

Let’s start with the big picture view of AT&T’s dynamic network efforts. What’s the goal?
Usually when I talk about our strategy I start at the network access layer. This is the physical infrastructure that AT&T has built over years – the fiber network and technologies like LTE on the wireless side, and what we call Lightspeed, which is a combination of fiber and copper. It’s a very robust network that has a tremendous reach and tremendous speed. All of that is foundational to what we’re doing now. Our Network on Demand platform acts like rapid onramps to that very fast network. So that physical layer is important and one area of the overall puzzle.

Another area is driven by John Donovan, Senior Executive Vice President of Technology and Operations, who is driving our software-centric architecture. We’ve called it different things over the last couple of years, including Domain 2.0, but at its core it’s about driving virtualization within our own network. He’s made the commitment that by 2020 we’re going to virtualize 75 percent of our network. That’s all about driving up utilization in the network and enabling scale and flexibility.


The third piece is enabling these same types of capabilities for our business customers, and that’s really where Network on Demand comes into play. It’s taking technologies that we’re utilizing internally and making our core strategic services better by utilizing the same technologies.

The Network on Demand platform initially launched with one capability — AT&T Switched Ethernet on Demand — and the second service that will launch is Managed Internet Service on Demand. Then we will continue to add additional services over time. So Network on Demand is creating this platform that enables customers to have a rapid onramp to that very robust network.

That gives customers more control of their network, the ability to rapidly scale up or scale down their network, and improves TCO, not just because you have the ability to use exactly what you want, but also because you can be more productive. You can spin up a location more rapidly than you could have in the past.

Then we will also start getting into services that take advantage of both SDN and NFV, where you’re actually virtualizing what has typically been purpose-built appliances. We don’t have a product in the market yet but we’ve announced that the first iteration will be available in the next few months.

We’ve reorganized our entire technology organization around network simplification and a software-centered network, and then exposing those capabilities to our customers. That’s the big picture and Network on Demand is one piece of that picture.

It’s important to understand that we have a lot of conviction across all three of those areas. From 2009 to 2014 we spent about $140 billion in those three areas. These aren’t hobbies. These are how we’re committed to drive a differentiated network experience.

How long has the switched Ethernet service been available?
We opened the first market in Austin, Texas, in November 2014, expanded to five markets in February of this year, and in April expanded to 170+ markets. So that is very, very fast for any service we’ve ever stood up. Part of it is the technology. It’s different. It’s built on a software layer that allows for rapid product instantiation, but we also used an agile approach to development, a DevOps model, and the combination of those things allowed us to move at a rapid pace.

When does the Internet on Demand service go live?

Managed Internet on Demand is in controlled introduction (CI) in Atlanta. Interestingly, with AT&T Switched Ethernet on Demand there’s no virtualization happening. It is an SDN layer on top of existing network infrastructure. Managed Internet on Demand is a different architecture that actually takes advantage of both SDN and NFV, so we’ll be virtualizing the customer edge. Typically the customer has a router on-premise that will be virtualized in the AT&T cloud, and then we will also be virtualizing the provider edge. It’s a big deal because I expect that over time we’ll be virtualizing a lot of different services, so we’re going through what it takes to do this with this next service for the first time ever.

The initial offering will be a virtualized router only. If they want they can buy a switch and put it at the end of their network and run something off of that. Eventually we’ll have a use case where we’ll actually deploy a piece of CPE on-premise they can use, but the initial use case is a pure virtualized router.


Coming back to the Ethernet On Demand service, how many customers do you have?
It’s been really interesting to see the way that has played out. As of today it’s over 350 customer networks, about 1,000 locations. That’s a lot more than what we had expected. Market demand has been pretty strong.

Is there a typical customer profile emerging?
It has been across industries. The largest network we have provisioned is about 150 locations, and the smallest is two locations. So it’s run the gamut. It is more prevalent so far in the mid-market and down-market, but every single one of our segments has seen traction.

One interesting thing is how we’ve simplified the selling experience. We’ve enabled our sales people to use an iPad to order the service and do all of the contract work with the customer on their premise. Historically the presale cycle alone was days. Now everything can be done in one sit-down discussion. That’s not the cool, interesting technology that SDN represents, but it is an interesting case of how, when you take friction out of the experience on both the seller’s side and the customer’s side, you’re going to see traction and we’ve seen it with Ethernet.

How long does it take to deliver Switched Ethernet On Demand?
When fiber is available to the customer building, and if you take out the “Customer not ready” situations, it’s five days. The equivalent when you’re in a fiber location but not on Network on Demand is probably closer to four weeks. When you don’t have fiber availability the cycle time obviously goes up because you have the build process, but we’re still automating the overall process with Network on Demand.

How much are early customers actually changing the Switched Ethernet service profile once they are online?
There are a couple of things customers can do, one of which is to add locations. Customers can go into the portal, which knows the inventory of their locations, and go from, say, a two-location network to a three-location or a four-location network in a matter of days without ever having to talk to anyone. That’s a big deal. Historically that would have been multiple phone calls and a fairly long provisioning cycle. Customers can now do it themselves at their own convenience.

And another common use case is the ability for a customer to scale up or scale down their network. Any business that has seasonality is going to be interested in that use case. For example, we have K-12 as well as college institutions that are very interested in that capability.

We’ve also talked to a few hospitals that have branch locations that do analysis on medical images that are interested in the ability to scale up the network to send large payloads and then scale it down again, which isn’t something they could do before.

One of our large customers uses Network on Demand service for redundancy between data centers. They have this as a secondary network and keep it scaled all the way down, and in the event they have an issue they can scale it up within seconds.

Another interesting use case is around rapid provisioning for M&A activity. After a merger or acquisition the network may go from 10 locations to 20 locations literally within a week, and provisioning agility is important for those types of situations.

Does it surprise you that the mid- to smaller-size shops are the early adopters? I would think the largest shops would be dying for these capabilities?
It does surprise me a little bit. I think there are a couple of things at work. I mentioned that customers manage their networks through a portal. Some of our very large customers will want to have their own network management tools tap directly into our network through APIs. We expect to do that. We just haven’t gotten there yet. In fact, I expect at some point we will federate our SDN Layer 2 network with other carriers that have SDN Layer 2 networks. It’s a very natural evolution. And federating the network with a large customer’s network management tools is just another version of that. It’s more of a northbound API as opposed to an east-west interface.
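
To illustrate what a northbound API integration of this kind could look like, here is a hypothetical sketch in Python. AT&T had not published such an API at the time of this interview, so the host, paths and field names below are invented for illustration only:

```python
import requests

# Hypothetical northbound API endpoint and token; invented for illustration.
API = "https://api.example-carrier.com/v1"
TOKEN = "…"  # bearer token obtained out of band

def set_bandwidth(network_id: str, site_id: str, mbps: int) -> dict:
    """Ask the carrier to change the committed bandwidth for one site on
    an Ethernet-on-demand network, as a customer's own network
    management tool might do instead of using the portal."""
    resp = requests.patch(
        f"{API}/networks/{network_id}/sites/{site_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"bandwidth_mbps": mbps},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: scale a backup link up before a large data-center sync.
# set_bandwidth("net-42", "site-7", 1000)
```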

I also think we have a service now that hunts very effectively against competitors that have been attacking us down market.

Going back to bandwidth flexibility, obviously the seasonal use cases make sense, but are other customers tweaking the settings more or less often than you would have expected?

Less often than I expected. When we started we limited the number of changes allowed to one per day because we had no idea what the actual behavior would be. What we’ve seen is customers aren’t going in and ratcheting it up and down frequently during the month. They may do it once or twice, but it’s not as prevalent as I would have expected. It’s still early, though. Since we’ve really been at scale only since April, it’s a bit early to say how much the behavior is going to shift.

What are the increments you can scale up and down?
It goes all the way from 2Mbps to 10Gbps, with several increments along the way.

You said you’re in 170 markets. How many states is that?
Our incumbent AT&T 21-state footprint. We do have fiber assets in other areas, including New York, Philadelphia and Boston. Those markets are not available yet on Network on Demand, but we expect that we will bring them on net in the future. For now the Ethernet on Demand service is limited to our 21-state footprint and the 170+ markets.

Internet on Demand is up next. What comes after that?
The next capability after Managed Internet is what we call Network Functions on Demand. Network Functions on Demand is basically re-looking at how premise-based appliances are used. Typically today you have purpose-built appliances, whether that’s a router or a firewall, a WAN accelerator, you name it. In the future we will offer what we call universal CPE that can run multiple virtual functions, so software instances of those capabilities on that universal CPE platform.

We are also building the ability to deliver those same types of virtual functions directly through our AT&T cloud. So you can envision a time where customers will have a series of capabilities that are delivered both through a universal CPE on their premise — again, things like router functionality, firewalls and WAN accelerators — as well as capabilities delivered directly through our cloud.

There would be advantages to using one versus the other. For example, an application that is going to be shared across multiple locations, you probably want to use more of a cloud approach, whereas an application that’s more specific to a location will sit on the universal CPE.

That capability will begin to roll out in the next few months. The first instantiation will be the universal CPE capability with a virtual router. Then we will add other virtual instances to the portfolio over time, and I expect it to evolve pretty dramatically throughout 2016.

I presume the universal CPE is a server that you manage?
Yeah, it’s a white box x86 server modeled to run three to four virtual functions. It’s got a Gig of throughput, so it’s a fairly robust platform. The box will be managed by AT&T. The virtual router will be managed by AT&T. But I expect over time you’re going to have multiple virtual functions on this box and we will have options for both AT&T managed as well as customer managed functions.

Will the virtual functions be available from different suppliers?
Yes, it’s an open platform and an ecosystem of partners. We’ve announced some partners and the ecosystem will expand over time. We’ve announced Juniper, Cisco, Brocade.

The appeal to the customer is fewer appliances to manage?
There are different value propositions with the universal CPE concept. One is you go from having multiple boxes to having one box. That’s a big deal. Just from power consumption and having less to worry about, that’s a big deal.

The other thing that is important is, because the functionality is being delivered through software that can be downloaded at any time, the issue of box obsolescence becomes less of a problem over time. And the installation cycle-time agility plays out here as well. Historically, if we were to install multiple boxes on a customer premise, that typically happened sequentially; it might take 30 days for the first one and upwards of 90 to 120 days all told. In the future this is a plug-and-play model.

Does NetBond fit into this picture?
As we talk about the AT&T SDN story, NetBond is an element of that story. NetBond is basically secure connectivity to third-party clouds. Today, if a customer wants to take advantage of NetBond and AT&T’s Switched Ethernet on Demand, they can. In the future I expect the two will become more and more integrated, and it will just be an extension of an overall on-demand experience. Both can be used today, but they’re not yet fully integrated through a single management pane of glass.


Cryptographic key reuse is rampant in European payment terminals, allowing attackers to compromise them en masse

Some payment terminals can be hijacked to commit mass fraud against customers and merchants, researchers have found.

The terminals, used predominantly in Germany but also elsewhere in Europe, were designed without following best security principles, leaving them vulnerable to a number of attacks.

Researchers from Berlin-based Security Research Labs (SRLabs) investigated the security of payment terminals in Germany and were able to use them to steal payment card details and PINs, hijack transactions and compromise merchant accounts. They plan to present their findings at the 32nd Chaos Communication Congress (32C3) later this month.


According to Karsten Nohl, the founder and chief scientist of SRLabs, most terminals in Germany use two communication protocols, ZVT and Poseidon, to talk with cash registers and payment processing providers respectively.

Both of these protocols have features that can be abused by hackers, but the problem is further exacerbated by poor design decisions by payment terminal manufacturers, like the reuse of cryptographic keys across all devices.

The ZVT protocol is used by around 80 percent of payment terminals in Germany to communicate with cashier workstations, SRLabs estimates. It was originally designed for serial connections, but it’s now used mostly on TCP/IP networks. This means that on local networks attackers can use techniques such as ARP spoofing to position themselves between terminals and cashier stations in order to intercept and send ZVT commands.

Some of the ZVT traffic is unencrypted, according to SRLabs. For example, a man-in-the-middle attacker can use the protocol without authentication to read the information stored on the magnetic stripes of payment cards inserted into payment terminals.

The protocol also has a mechanism for requesting and obtaining a card’s PIN, but such requests need to be signed with a message authentication code (MAC). The MAC is verified using a key that’s typically stored inside the payment terminal’s hardware security module (HSM), a special component designed for secure key storage and cryptographic operations.

The problem is that most terminals, regardless of manufacturer, share the same signature key, violating a basic principle of security design, Nohl said.
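
To see why a single shared signature key is so damaging, consider this illustrative Python sketch of MAC-based command signing. The key, message format and MAC algorithm are invented for illustration and do not match the real ZVT details; the point is that whoever extracts the one shared key can sign sensitive commands for every terminal that trusts it:

```python
import hmac
import hashlib

# In a sound design this key would be unique per terminal. The research
# found one key shared across most German terminals, so extracting it
# from a single device defeats the check fleet-wide.
SHARED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # illustrative

def sign(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Compute the MAC a terminal would verify before honoring a
    sensitive command such as a PIN request."""
    return hmac.new(key, command, hashlib.sha256).digest()

def terminal_accepts(command: bytes, mac: bytes, key: bytes = SHARED_KEY) -> bool:
    """Constant-time comparison, as the terminal's HSM would perform."""
    return hmac.compare_digest(sign(command, key), mac)

cmd = b"REQUEST_PIN terminal=any"
assert terminal_accepts(cmd, sign(cmd))  # one leaked key signs for all devices
```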

The HSM in some terminal models is vulnerable to so-called timing side channel attacks that can be used to extract the key within minutes after gaining access to the terminal through a JTAG debugging connection or a remote code execution flaw, he said.

Attackers can easily find and buy such vulnerable terminals on eBay. Once they extract the key from it, they can use it against most other devices, including newer models, because of the pervasive key reuse among payment terminal manufacturers in Germany.

Terminals used in other countries, especially in Europe, use a different communications protocol called OPI (Open Payment Initiative) that is similar to ZVT, but lacks the remote management functionality that attackers can abuse.

However, some terminal manufacturers added proprietary extensions to OPI to implement that functionality, because they like the comfort of remote management, Nohl said. “At least we’ve seen this in a few cases. We can’t guarantee that it’s widespread, but every implementation of OPI that we’ve looked at had extensions that brought back remote manageability, and like in ZVT, it wasn’t secure.”

With magnetic stripe data and the associated PINs, attackers can clone payment cards and commit fraud, even in countries where chip-protected (EMV) cards are widely deployed.

EMV-capable terminals still support magstripe-based transactions for cards that don’t have a chip, and verifying whether the card has a chip or not is usually done by checking a specific bit stored on the magnetic stripe. So an attacker can simply change that bit on his cloned card, Nohl said.
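
The check Nohl describes lives in the three-digit service code on track 2 of the magnetic stripe, where a first digit of 2 or 6 indicates that the card carries a chip (per ISO/IEC 7813). A minimal Python sketch of the logic, using invented card data:

```python
def chip_expected(track2: str) -> bool:
    """Parse the service code from track-2 data and report whether it
    declares an EMV chip (first digit 2 or 6 per ISO/IEC 7813)."""
    _pan, _, rest = track2.partition("=")  # rest = YYMM expiry + service code + ...
    service_code = rest[4:7]
    return service_code[:1] in ("2", "6")

# Invented card data: expiry 1812, service code 201 -> chip expected.
print(chip_expected("4111111111111111=1812201000000000"))  # True
# A cloned magstripe with the first digit rewritten claims "no chip".
print(chip_expected("4111111111111111=1812501000000000"))  # False
```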

Another attack that the SRLabs team found possible through ZVT is to force a terminal to associate with a different merchant account, like one controlled by a hacker, and which would receive all the money from transactions performed through that terminal.

This can be done by a man-in-the-middle attack through a password-protected command that instructs the terminal to change its ID to one that the payment processor associates with a different merchant. The password is the same for all terminals tied to a specific processor, the SRLabs researchers found.

When the terminal ID changes, the processor will send a new configuration back to the terminal including the new merchant’s transaction limits and banner — the merchant identifying information that appears on the printed receipts. The attacker can actually intercept this information and change it so that receipts retain the old merchant’s banner, while the money is funneled to the different account controlled by the attacker.

A third attack is possible through the Poseidon protocol that’s also widely used in Germany and in some other countries like France, Luxembourg and Iceland. This protocol is used by terminals to communicate with the backend servers of payment processors and is a variation of an international standard called ISO 8583.

Payment terminals require a secret key to authenticate with payment processors over the Poseidon protocol. However, like with ZVT, payment terminal manufacturers implemented the same authentication key across all of their terminals, SRLabs found.

This error can be abused to steal money from merchant accounts. While most transactions add money to such accounts in exchange for goods or services, there are a few that can cost merchants money, for example transaction refunds or top-up vouchers like those used to recharge prepaid SIM cards.

In the worst case scenario, attackers could hijack terminals and use them to issue refunds to bank accounts under their control from thousands of merchants by simply iterating through terminal IDs, which are usually assigned incrementally.

Nohl said that SRLabs performed a demonstration of the attacks for payment terminal manufacturers. Their response was that they haven’t seen this type of fraud outside of a laboratory setting, but that they’re working to address the issue, he said.

In both cases, the people who implemented these protocols, which were developed independently of each other, didn’t understand how to do proper key management, Nohl said.

Fortunately, there is functionality in them that allows older keys to be replaced with new ones and which could be used to provide every terminal with its own unique key, as long as the backend servers are also modified to support such a deployment, the researcher said.

The terminals would still be vulnerable to remote code execution or timing side channel attacks, but at least extracting a key would restrict the abuse to a single terminal, not hundreds of thousands.

In the short term, it’s paramount to replace existing keys with unique ones for every terminal. In the longer term, better standards should be designed that rely less on the security of the terminals themselves, for example by implementing public-key cryptography instead of symmetric-key algorithms, Nohl said.
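
A standard way to give every terminal its own key without provisioning each device by hand is key diversification: derive each terminal’s key from a backend master key and the terminal ID, so that extracting one terminal’s key compromises only that terminal. A minimal sketch in Python, with an invented master key and ID scheme:

```python
import hmac
import hashlib

MASTER_KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # illustrative

def terminal_key(terminal_id: str) -> bytes:
    """Derive a unique 256-bit key for one terminal from the master key.
    The backend can re-derive it on demand; terminals never hold the
    master key, so extracting one terminal's key compromises only it."""
    return hmac.new(MASTER_KEY, terminal_id.encode(), hashlib.sha256).digest()

assert terminal_key("TID-000001") != terminal_key("TID-000002")
```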


All the facts worth knowing about IT leaders’ tech budgets, spending plans, hiring priorities and strategic initiatives for 2016.

Ready, set, disrupt!

If an overarching conclusion can be drawn from the results of Computerworld’s Forecast survey of 182 IT professionals, it’s that 2016 is shaping up to be the year of IT as a change agent.

IT is poised to move fully to the center of the business in 2016, as digital transformation becomes a top strategic priority. CIOs and their tech organizations are well positioned to drive that change, thanks to IT budget growth, head count increases and a pronounced shift toward strategic spending.

Amid the breakneck pace of change in technology and business alike, where should you direct your focus in the new year?

Read on for key highlights and data points on budgeting, hiring, business priorities and disruptive technologies that promise to define the IT landscape in 2016.
[Chart from Computerworld’s Tech Forecast 2016 survey: Tech Spending Continues to Rise]

IT budgets on the rise…again
As companies continue to rely upon technology to help differentiate themselves in the marketplace, tech budgets remain on an upward trajectory.

Almost half (46%) of respondents to the Forecast 2016 survey indicated that their technology spending will increase in 2016, on average by 14.7%. (By comparison, last year 43% said spending would increase, on average by 13.1%.)

Close to an equal number (42%) reported that their technology spending will remain the same, with only 12% anticipating a decrease in IT budgets.
[Chart from Computerworld’s Tech Forecast 2016 survey: Budget Booms and Busts]

Security, cloud computing are top areas for investing
With security concerns top-of-mind for IT professionals as they gear up for 2016, it’s no surprise that exactly half of respondents chose security as the top area where their companies plan to increase spending.

Cloud computing came in a close second, and the top area where organizations plan to decrease spending is on-premises software — both of which indicate that companies’ journey to the cloud will continue in 2016.

IoT tops new areas of spending for 2016
After several years of languishing in the tech hype cycle, the Internet of Things finally looks to be commanding tech execs’ attention, with 29% of respondents identifying it as a new area of spending for 2016.

Green IT, which likewise had been back-burnered at many organizations, popped up on respondents’ radars as well, with 16% saying energy-saving technologies will be a new spend for them in the year ahead.

IT pros’ No. 1 challenge: Budgeting
As they do every year, budget constraints top the list of leadership challenges identified by survey respondents.

Security came in second among IT pros’ concerns after a year of ever bigger and more serious corporate hacks.

Sam Redden, chief security officer at Brazos Higher Education Service, a Waco, Texas-based student loan servicing company, sums up the feelings of many IT leaders when he says, “I wouldn’t be foolish enough to say I stay ahead of the bad guys. The bad guys stay ahead of everybody.”

Dueling goals for IT in 2016
Survey respondents’ goals for their most important tech projects betray the bimodal nature of the modern IT department.

Tech leaders say they’re striving to maintain or improve service levels, long one of IT’s core responsibilities. At the same time, they’re seeking to generate new revenue streams or increase existing ones, a new responsibility in most evolving technology departments.

“As technology becomes an integral part of every aspect of business and the way we interact with customers, it’s raising the profile of the IT group and forcing IT to think about more than just keeping the lights on,” says David Cearley, a fellow at Gartner. “We are seeing greater alignment as IT steps up to drive digital business.”

A piecemeal journey to the cloud
Heading into 2016, cloud computing shows no signs of slowing down, as tech leaders indicate that spending and new cloud initiatives remain on the upswing.

In terms of where organizations are in their cloud transition, 29% of survey respondents confirmed they had already moved some enterprise applications to the cloud, with more to come, while 7% said they’re in the process of migrating mission-critical systems to a cloud environment.

Interestingly, a full 20% of respondents are bucking the trend entirely, reporting they’re not moving to the cloud at all.

IT staffs to increase in 2016
As budgets rise and projects abound, many firms are looking to increase IT head count. Some 37% of survey respondents said they’re planning to increase staff levels, up from 24% last year.

In keeping with IT’s new role as an organizational agent of change, 42% of survey respondents with hiring plans are in search of people with combined tech and business backgrounds that will allow them to articulate the value of IT in meeting business goals.

Architecture, app dev among most wanted skills
The list of most in-demand IT skills starts off with a surprise. Although IT architecture is a fundamental area of expertise for techies at all levels and in various roles, it rarely makes anyone’s list of hot skills.

The term “IT architect” encompasses a wide range of specialists, from enterprise architects to cloud architects, so recruiters say it makes sense that IT architecture expertise is in demand as companies move forward with all sorts of technology-driven projects.

Beyond that, application development, project management, big data, BI, help desk and cloud all remain high on hiring managers’ lists as IT gears up for the year ahead.


John Reed, senior executive director of IT staffing firm Robert Half Technology, says those hiring managers could be facing a challenge. “The IT market has been really strong, and we’re expecting it will stay that way for the foreseeable future,” he says. “I don’t think you’ll see explosive growth, but you’ll see single-digit growth in demand, consistent with what we’ve seen over the past few years.”

Security, BI talent expected to be scarce
With all eyes on security in the coming year, it's little surprise that survey respondents expect to have a difficult time hiring technologists with that expertise.

According to Robert Half Technology’s 2016 Salary Guide, salaries in the security field will rise about 5% to 7% next year, ranging from $100,000 on up to nearly $200,000 on average.

Disruptive technologies 3-5 years out
When asked what technologies are likely to have an impact in the next three to five years, survey respondents chose cloud computing/software-as-a-service by a wide margin, followed by self-service IT, predictive analytics, the Internet of Things and unified communications.

The cloud will continue to reshape enterprise IT, according to research firm IDC, which predicts that more than half of enterprise IT infrastructure and software investments will be cloud-based by 2018. Specifically, spending on public cloud services will grow to more than $127 billion by 2018, according to an IDC forecast report.

Kicking the tires on new technologies
All manner of virtualization and “as-a-service” options topped survey respondents’ lists of technologies being piloted or beta tested at their organizations, with BI/analytics, cloud computing and mobile/wireless rounding out the top five.

“Virtualization 2.0” is of particular interest to survey respondents, as companies move beyond the first steps of server virtualization to explore virtualized desktop, storage, mobile and network options.

2016 is IoT’s year to shine
In 2016, the Internet of Things (IoT) will no longer be the stuff of science fiction, but rather a near-future reality for IT organizations across many industries, observers say.

In Computerworld’s Forecast 2016 survey, 29% of the respondents identified IoT initiatives — and related machine-to-machine and telematics projects — as new areas of spending for the year ahead. In comparison, just 12% of those polled last year said IoT work would be a new IT expenditure in 2015.

Likewise, the percentage of respondents who said they planned to launch IoT projects over the next 12 months rose from 15% last year to 21% this year. Additionally, 14% of this year’s respondents said they plan to beta-test IoT technologies, up from 7% last year.

Wearables in the enterprise? Not so much
While consumer-oriented wearable devices like Google Glass and the Apple Watch launched to great fanfare, the reality is that enterprises aren’t ready to make practical use of wearable systems, at least for the foreseeable future.

Wearable technology was last on the Forecast 2016 list of systems currently being assessed in beta tests and pilot projects, with only 4% of respondents saying they had projects underway involving wearables.

Furthermore, 78% said they were not currently working on wearable apps or anticipating the need to support wearables in the near future. And only 8% of those polled said wearables would play a role in their business or technology operations, while just 12% indicated that they were adjusting their mobile device management strategies to include wearables.


Have you implemented policies to ensure your business is risk-ready?

Data breaches are serious and very real threats in today's digital world, and no industry sector is immune. In the medical sector alone, the combined cost of data breach liability, expenses, and settlements has surpassed the equivalent costs of medical malpractice. Securing data and minimizing the probability and impact of data breaches is, at its core, a risk-based endeavor.

While many businesses have recognized the need for risk assessment and management, there is still a tendency to treat both as "checkbox" exercises. For a risk management program to provide true benefit, several things are required:

An enterprise-level risk management practice. This is NOT your IT risk management team – it is a standalone and empowered practice that operates at the CXO level. This team is focused on business alignment.
An IT-level risk management practice. This team is focused on the application and testing of applicable risk management frameworks and the controls associated with those frameworks.
Certified and qualified risk management professionals. There are several industry certifications available. CRISC (Certified in Risk & Information Systems Control) and CRMP (Certified Risk Management Professional) are examples. They both require hefty amounts of continuing education, which is critical, given the moving target that cybersecurity has become.

Too often we see businesses with some partial combination of these elements, but we rarely see them address the complete picture.
4 Ways to Approach Risk

Risk assessment doesn't need to be an enigma. Once risks are identified, they can be dealt with in only one of four ways, with the selection for each risk factor determined with a business-alignment mindset:

Accept the risk. This is appropriate for risk factors with low probability and low impact.
Avoid the risk. Patient: “Doctor, my arm hurts when I do this!” Doctor: “Well then, don’t do that!” In all seriousness, this means that the organization shouldn’t engage in business activities not aligned with their primary mission or outside their area of primary expertise. This is appropriate for risk factors with high probability and high impact.
Transfer the risk. This is appropriate for risk factors with low probability but high impact. Examples are insurance policies and outsourcing of high-capital-expense or high-expertise elements such as data center services. (Disclosure: I work for Lifeline, a provider of data center facilities and services.)
Mitigate the risk. This approach is appropriate for risk factors with high probability but relatively low impact. Additionally, if you happen to be a service provider that other organizations transfer risk to (like a data center provider), you are the last stop for risk, and you must find ways to mitigate it. A minimal sketch of this triage logic follows the list.
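
To make the triage concrete, here is a minimal sketch in Python of how a risk register might be bucketed into those four responses. The cutoff, risk names, and probability/impact scores are illustrative assumptions, not figures from any framework.

    # Hypothetical sketch: bucket risk factors into the four responses above.
    # The cutoff and sample scores are illustrative assumptions only.

    def choose_response(probability: float, impact: float, cutoff: float = 0.5) -> str:
        """Map a risk factor's probability and impact (each 0..1) to a response."""
        high_p, high_i = probability >= cutoff, impact >= cutoff
        if not high_p and not high_i:
            return "accept"    # low probability, low impact
        if high_p and high_i:
            return "avoid"     # high probability, high impact
        if high_i:
            return "transfer"  # low probability, high impact (e.g., insurance)
        return "mitigate"      # high probability, relatively low impact

    risks = {
        "printer outage":      (0.6, 0.2),
        "data center flood":   (0.1, 0.9),
        "credential phishing": (0.8, 0.8),
        "lobby TV failure":    (0.2, 0.1),
    }
    for name, (p, i) in risks.items():
        print(f"{name}: {choose_response(p, i)}")

A real program would weight those scores with the actuarial data and business context described below.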

Obviously, the parsing of risk factors into their appropriate action buckets is a complex process requiring knowledge of the threats themselves, the technology involved, business alignment, vendor capabilities, actuarial data, etc.

Clearly, organizations that only avoid or accept risk aren't setting themselves up for success. Being proactive instead of reactive is key to ensuring you cover as many vulnerabilities as possible.

On the other hand, many businesses realize they don't have the staff, objectivity, time, or money to allocate to risk management. These can be barriers to success, along with ego factors such as politics, turf wars, and ambition. Therefore, the most popular of the four options is transferring the risk to someone else, which effectively hands the fourth option, mitigation, to that provider as well.

The biggest benefit of this option is that hiring outside help can be the most cost-effective choice, given that the cost of attracting certified risk management professionals and obtaining certifications for your business could run upwards of $1 million, plus the time and resources that translate into overhead. When in doubt, I recommend transferring the risk to a partner who can mitigate it more effectively.
Implementing Risk Management

Before you can develop a risk management practice that makes sense, you need to assess where you currently stand. Instead of trying to assess the situation yourself, it’s important that you hire a third party to complete a risk assessment of your business that spares no detail. Thoroughness is an advantage; the more you know, the more you can mitigate risk.

The next decision you need to make is whether or not you want to eat the cost and handle it internally, or if you want to transfer that risk to an outsourced party.

Finally, regardless of whether you keep it in-house or transfer your risk, you do need to dedicate resources to your risk management practice so you can mitigate vulnerabilities as much as possible.

The consequences of not understanding and addressing your risks can be dire – from not being able to attract quality talent to destroying your reputation and credibility to going out of business.

Are you risk-ready?

 


 

From unstructured data mining to visual microphones, academic labs are bringing future breakthrough possibilities to light

If you take a look at the list of trending repositories on GitHub, you'll see amazing code from programmers around the world, working for firms big and small. But one thing you don't often see is work that comes from university labs. It's rare for the next big thing to escape from an academic computer science department and capture the attention of the world.

That's not a knock on university research. But competing with open source projects that enjoy broad support across the industry and around the world is challenging for a handful of academics and grad students. Sure, many of the top computer science schools are well off, but that doesn't mean the money is pouring into research. Open source programmers, on the other hand, can usually build better code faster, often because they have bosses who pay them to build something that will pay off next quarter, not next century.

Yet good computer science departments still manage to punch above — sometimes well above — their weight. While a good part of the research is devoted to arcane topics like the philosophical limits of computation, some of it can be tremendously useful for the world at large.

What follows are nine projects currently under development at university labs that are worth your attention. They may not be the absolute best or furthest along, but each has the potential to have a broad impact on the world of computing. Some offer shipping code, others offer mostly potential, but all offer a straightforward path for transforming our world with useful computation.

DeepDive

Big data is one area where academia’s focus on mathematical foundations can pay off, and one of the more prominent packages to gain attention of late is DeepDive, a tool for exploring unstructured text. While many big data projects work with well-structured information that’s already in tables, DeepDive focuses on finding correlations in raw text files and other files that aren’t organized.

The Java code runs a pipeline that pushes the raw data through a set of tools that parses natural language into streams of entities — that is, people, places, companies, or things. Then it uses statistical algorithms to search for connections among the entities, even if they’re not explicitly spelled out. These results are then boiled down to clear inferences and inserted into an old-school database.

The results vary depending upon the style of the text, the nature of the query, and the clarity of the writing, but in good circumstances the tool can deliver better results than humans can. The developers even report that some studies have shown that DeepDive “exceeded the quality of human volunteer annotators in both precision and recall for complex scientific articles.”
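
DeepDive's actual pipeline relies on real natural-language parsing and statistical inference, but the core idea can be sketched in a few lines of Python. Everything here is a stand-in: the regex "entity extractor" and the co-occurrence counting merely gesture at what the real system does with far more rigor.

    # Toy illustration of the DeepDive idea: pull entities out of raw text,
    # then treat repeated co-occurrence as evidence of a relationship.
    # A real pipeline uses NLP parsers and statistical inference, not regexes.
    import re
    from collections import Counter
    from itertools import combinations

    docs = [
        "Alice Smith joined Acme Corp in Boston.",
        "Acme Corp opened a Boston office led by Alice Smith.",
        "Bob Jones left Globex Inc for Acme Corp.",
    ]

    def extract_entities(text):
        # Crude stand-in for named-entity recognition: capitalized word runs.
        return set(re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", text))

    pair_counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(extract_entities(doc)), 2):
            pair_counts[pair] += 1

    # Frequent pairs become candidate rows for the old-school database.
    for (a, b), n in pair_counts.most_common(3):
        print(f"{a} <-> {b}: co-occurs in {n} document(s)")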

ZeroCoin
Bitcoin may be many things, but it is not as anonymous as many assume. The system tracks all transactions, so it’s possible to trace a single coin from the date it was born, through every owner, to its current one. ZeroCoin wants to change that. The proposed system will establish a parallel world where coins will enter and leave, erasing the trail. It promises privacy and security in one.

The system establishes a new temporary currency called a ZeroCoin that’s kept in a big, anonymous pool that doesn’t track ownership or provenance. The true owner can spend the coin by creating a zero-knowledge proof that establishes their rightful control without revealing their identity. The coin is then removed from the anonymous pool and converted back into a regular bitcoin.

“Our goal is to build a cryptocurrency where your neighbors, friends, and enemies can’t see what you bought or for how much,” ZeroCoin’s developers say.
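
The hard part is the cryptography, but the bookkeeping is easy to sketch. The toy Python below mimics the mint-and-spend flow with plain hash commitments. Note the loud caveat: this toy is not anonymous, because spending reveals which coin is being spent; in the real ZeroCoin protocol that reveal is replaced by a zero-knowledge proof that the spender owns some coin in the pool, without saying which one.

    # Toy mint/spend bookkeeping in the spirit of ZeroCoin. NOT anonymous and
    # NOT secure: a real implementation replaces the reveal in spend() with a
    # zero-knowledge proof of membership in the minted pool.
    import hashlib, os

    minted = set()   # pool of commitments; no ownership or provenance recorded
    spent = set()    # serial numbers already used, to block double-spending

    def commit(serial: bytes, blinding: bytes) -> str:
        return hashlib.sha256(serial + blinding).hexdigest()

    def mint():
        serial, blinding = os.urandom(16), os.urandom(16)
        minted.add(commit(serial, blinding))
        return serial, blinding          # kept secret by the coin's owner

    def spend(serial: bytes, blinding: bytes) -> bool:
        if commit(serial, blinding) not in minted or serial in spent:
            return False
        spent.add(serial)                # the serial number is now burned
        return True

    s, b = mint()
    assert spend(s, b) and not spend(s, b)   # second attempt is a double-spend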

Burlap

Burlap lets you define the problem as a network of nodes with vectors of features or attributes attached to them. The algorithms can search through the network using a combination of brute-force searching and statistically guided exploration. The higher level of the algorithm plans the search and deploys the best algorithms. The toolkit includes dozens of the most useful algorithms for agent-based search.

The tool is useful for data-driven worlds where the data can be mapped into a large collection of nodes or objects. The code is written in Java and includes a large assortment of debugging and profiling tools that are useful for keeping the code moving toward the optimal goal.
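
Burlap itself is a Java toolkit, so the following Python fragment is not its API; it is only a compact illustration of the kind of guided search the paragraph describes, expanding the most promising node first by a heuristic score instead of enumerating everything. The graph and scores are made up.

    # Illustration only (not Burlap's Java API): best-first search over a
    # node graph, ordered by a heuristic instead of brute-force enumeration.
    import heapq

    graph = {  # hypothetical state graph: node -> neighbors
        "start": ["a", "b"], "a": ["c"], "b": ["c", "goal"],
        "c": ["goal"], "goal": [],
    }
    score = {"start": 3, "a": 2, "b": 1, "c": 1, "goal": 0}  # lower is better

    def best_first(start, goal):
        frontier, seen = [(score[start], start, [start])], set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt in graph[node]:
                heapq.heappush(frontier, (score[nxt], nxt, path + [nxt]))
        return None

    print(best_first("start", "goal"))   # ['start', 'b', 'goal']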

SpiroSmart
Smartphones may let us talk, text, and even watch cat videos, but their greatest contribution to society may be as mobile doctors, ready to track our health day in and day out. Among the hundreds of new apps for tracking our bodies is SpiroSmart, a software program that analyzes our lungs by listening to us breathe and measuring the echoes and reverberations.

The traditional medical test, spirometry, requires people to breathe through a tiny windmill that measures the intensity of the airflow. Using a microphone reduces the danger of contamination and makes it possible for people to test their breathing discreetly throughout the day.

The project is one part of a collection of tools analyzing lung health. Another tool, CoughSense, will record the number and severity of “cough episodes” during a day. It replaces specialized equipment or paper logs. Another approach, WiiBreathe, watches the distortion of Wi-Fi signals in the 2.4GHz range as they pass through the body and the lungs. It can track breathing within “the accuracy of 1.54 breaths per minute when compared to a clinical respiratory chest band.” All promise to reduce the need for specialized hardware, making testing simpler and more effective for all users.

Halide
As digital photography becomes more common, it’s only natural that people will want to do more to their images than merely look at them. Some want to filter the colors, others want to edit the images, and still more want to use the images as input to some algorithm, perhaps for steering an autonomous car.

All of these algorithms require loops — lots and lots of nested loops churning through the rows and columns of pixels. It turns out that paying attention to how data is cached when you structure these loops can make a big difference in speed. And if you want to convert your algorithm to run on a GPU, you'll need to rethink the loop structure all over again.

Halide is a computer language for image processing designed to abstract away these decisions for you. It will worry about the loops and GPU conversions for you. If you write the instructions for analyzing a single pixel, it will produce fast code for churning through the entire image.
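
Halide's language is embedded in C++, so rather than guess at its syntax, the Python timing sketch below just demonstrates the underlying point: the same per-pixel math runs at very different speeds depending on how the loops are structured and how memory is traversed, which is exactly the decision-making Halide automates.

    # Same brightness adjustment two ways: explicit nested loops versus one
    # vectorized pass. The per-pixel math is identical; only the loop
    # structure (and therefore the memory access pattern) changes.
    import time
    import numpy as np

    img = np.random.rand(1000, 1000)

    t0 = time.perf_counter()
    out_loops = np.empty_like(img)
    for y in range(img.shape[0]):          # row-by-row nested loops
        for x in range(img.shape[1]):
            out_loops[y, x] = min(img[y, x] * 1.2, 1.0)
    t1 = time.perf_counter()

    out_vec = np.minimum(img * 1.2, 1.0)   # one cache-friendly bulk pass
    t2 = time.perf_counter()

    assert np.allclose(out_loops, out_vec)
    print(f"nested loops: {t1 - t0:.3f}s, vectorized: {t2 - t1:.3f}s")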

Visual Microphone
Cameras have traditionally been used to take static photos of things to save for the future. The things might be moving when the shutter snaps, but after that, they’re frozen for eternity like people on a Grecian urn. They do what your eyes do by capturing light forever.

Now that superfast cameras can capture hundreds or thousands of images per second, researchers are discovering that the cameras can do more than imitate the eyes. They can also do what our ears and skin can do by sensing sound or vibration using light alone.

The Visual Microphone project uses a series of images to detect small movements in an object. In the demonstration video, Visual Microphone watches for tiny movements that a crinkly potato chip bag creates when sound hits the bag. The vibrations may be very slight, but they’re enough for the software to recover a reasonable approximation of the sound.

The team is applying the same general idea to other problems like determining whether a building or a bridge is stable and safe. They can use a sequence of images from a windy day to look for small or not so small changes in the building. Dangerous resonant vibrations may not be large enough to be seen by a human or even felt, but the camera can flag them.

The idea is simple enough to spawn a number of other sensors. Cameras can take our pulses by tracking the flow of blood through the subtle blushing of the skin. Video rib monitors can count the breaths of an infant by watching the expansion of the chest. In these cases, the camera is not only more efficient, but safer because it doesn’t make contact and works from a distance.

Drake
Robots and drones are becoming more and more common in the enterprise as they move from the labs and take on crucial roles. Controlling these machines requires a good grasp of the laws of physics. Drake is a collection of packages that makes it a bit easier to write the code controlling these machines.

The code delivers a number of basic and not-so-basic models for predicting how your robot will move. You can begin with rigid-body models, layer in aerodynamic effects, and feed it all into a dynamic control algorithm. There's also a complement of visualization tools to debug your code and watch how it behaves.

Institution: Massachusetts Institute of Technology
GitHub: https://github.com/RobotLocomotion/drake/wiki

R

Anyone who's spent time with big data or data scientists knows that they rely, more often than not, on a language called R to chew through the numbers and deliver the kind of statistical insights that make managers happy. Whether it's marketing, risk management, scheduling, or any of a host of other jobs involved in keeping an enterprise running, R is tuned for the statistical analyses that prove or disprove a hypothesis.


Education
Now, saving the best for last, is the one thing that universities do better than anyone: teach. All of these projects are nice, but many schools are also open-sourcing and sharing their courses. They’re sharing the course materials, streaming video lectures, and even organizing the kind of study groups and grading sessions that turn a lecture or a book into a full course.

There are dozens of good courses, so it's possible to knit together a complete degree for free (or at low cost). These two GitHub repositories are pointers to a few of the real courses out there. Drink deeply, because you won't be limited by, say, tuition.


 

Outside Building 99 on Microsoft's Redmond, Washington, campus. Credit: Microsoft
Sysadmins can now turn on the feature in System Center Endpoint Protection and Forefront Endpoint Protection

It’s time to throw adware, browser hijackers and other potentially unwanted applications (PUAs) off corporate networks, Microsoft has decided. The company has started offering PUA protection in its anti-malware products for enterprise customers.

The new feature is available in Microsoft’s System Center Endpoint Protection (SCEP) and Forefront Endpoint Protection (FEP) as an option that can be turned on by system administrators.

PUA signatures are included in the anti-malware definition updates and cloud protection, so no additional configuration is needed.

Potentially unwanted applications are those programs that, once installed, also deploy other programs without users’ knowledge, inject advertisements into Web traffic locally, hijack browser search settings, or solicit payment for various services based on false claims.

“These applications can increase the risk of your network being infected with malware, cause malware infections to be harder to identify among the noise, and can waste helpdesk, IT, and user time cleaning up the applications,” researchers from the Microsoft Malware Protection Center said in a blog post.

System administrators can deploy PUA protection for the specific anti-malware product version in their organization through the registry as a Group Policy setting.
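
As an illustration only: Microsoft's announcement at the time described an opt-in DWORD value named MpEnablePua under the Microsoft Antimalware policy key. The Python sketch below sets it with the standard-library winreg module; treat the exact key path and value name as assumptions to verify against current Microsoft documentation, and note that writing to HKLM requires administrative rights.

    # Hedged sketch: enable PUA protection via the opt-in registry value
    # described in Microsoft's announcement. Key path and value name are
    # assumptions to verify against current docs; must run as administrator.
    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Microsoft Antimalware\MpEngine"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 1 = block potentially unwanted applications
        winreg.SetValueEx(key, "MpEnablePua", 0, winreg.REG_DWORD, 1)

In practice most shops would push the same setting as a Group Policy object rather than script it per machine.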

Microsoft recommends that this feature be deployed after creating a corporate policy that explains what potentially unwanted applications are and prohibits their installation. Employees should also be informed in advance that this protection will be enabled to reduce the potential number of calls to the IT helpdesk when certain applications that worked before start being blocked.

If the network is already likely to have many PUA installations, it's recommended to deploy the protection in stages to a limited number of computers in order to see whether any detections are false positives and to add exclusions for them. Exclusion mechanisms based on file name, folder, extension and process are supported, the Microsoft researchers said.

 


Microsoft risks IT ire with Windows 10 update push

Written by admin
November 8th, 2015

Its OS-as-a-service could create headaches for shops used to a slower upgrade pace

Microsoft has made it clear that it will take on a greater role in managing the Windows update process with Windows 10. The company has also made it clear that it will aggressively push users — both consumers and businesses — to upgrade from Windows 7 and Windows 8 to its latest OS. With that in mind, it's hard to imagine either predecessor hanging around anywhere near as long as Windows XP.

The decision to not only push updates out, but also ensure that all Windows 10 devices receive them in a timely fashion, fits well with the concept of Windows as a service. The change may even go unnoticed by many consumers. IT departments, however, are keenly aware of this shift — and many aren’t happy about it.

Managing Windows updates — old vs. new

Traditionally, Microsoft has given IT the final word on patches and updates. While most departments do roll out critical patches and major updates, they do so on their own time frame and only after significant testing in their specific environment. This ensures that an update doesn't break an app or a PC configuration, or cause other unforeseen issues. If a required update could introduce problems, IT can develop a plan to address the issue in advance of deployment. Some updates might even be judged unneeded and never get deployed.

With Windows 10, Microsoft is adopting a service-and-update strategy based on a series of tracks known as branches. In this model, both security and feature updates are tested internally and made available to Windows Insiders. When Microsoft feels the updates are ready for primetime, they’re pushed to the Current Branch (CB). CB devices, predominantly used by consumers, receive the updates immediately through Windows Update.

Businesses and enterprises typically fall under the Current Branch for Business (CBB). Like CB devices, CBB hardware will be able to receive updates as soon as they are published, but can defer those updates for a longer period of time. The rationale for this extra time is twofold. First, the updates will have received extra scrutiny, having been tested internally, by Windows Insiders and by consumers via the CB, so any issues will likely be resolved, or at least identified, during that time. Second, it gives IT shops time to test the updates and develop strategies to deal with potential problems before those updates become mandatory.

Complicating the situation: There are still unknowns about how IT departments will handle the CBB update cadence and process. Microsoft has yet to complete Windows Update for Business (WUB), a set of features and tools that will be made available to organizations that have adopted the CBB update pace. There is also the possibility of using other tools, including Windows Server Update Services (WSUS), Microsoft’s System Center Configuration Manager (dubbed “Config Manager”), or a third-party patching product that can handle longer postponements.

IT pros aren’t happy

This marks a massive transition in how Windows is deployed, updated and managed in enterprise environments. Many longtime IT pros won’t be comfortable ceding this much control to Microsoft. Susan Bradley, a computer network and security consultant known in Windows circles for her expertise on Microsoft’s patching processes, has become a voice for those IT workers.

In August, Bradley kicked off a request on the matter using Microsoft’s Windows User Voice site asking for a more detailed explanation of the Windows 10 update process. Last month, she upped the ante by starting a Change.org petition demanding additional information from Microsoft as well as a change to how it will deliver updates. As of this week, the petition has more than 5,000 signatures; some signers have noted that they will refuse to move their organizations to Windows 10 unless changes are implemented.

Change.org petition for Windows 10. Credit: Change.org

A Change.org petition that has collected 1,600 signatures asks Microsoft CEO Satya Nadella to make his Windows 10 team provide more information to users about updates, and give customers more control over what they install on their PCs.

The impact of the petition remains to be seen. Microsoft has already established that it views its new Windows-as-a-service model, with frequent incremental updates using the branch system, as the future. Windows 10 has already passed the 132-million PC mark, and Microsoft appears unapologetic about its plans to pressure users into upgrading to the new OS. All of these factors make it unlikely the company is going to reverse course.

This isn’t entirely new territory

The new approach to update management is striking compared to the process for previous Windows releases, but it isn’t exactly a new model. iOS, Android and Chrome OS all limit IT’s ability to manage the update process to one degree or another.

Apple has always placed the user at the center of the iOS upgrade process. When an update becomes available, users can download and install it on day one. iOS 9 introduced the ability for IT to take some control over the process, but only in the opposite direction — allowing IT to require that devices be updated, a move designed less to ensure IT management of the overall process and more to ensure that iPhones and iPads are running the latest, and therefore most secure, version of iOS.

Things are a bit murkier with Android because each manufacturer and carrier generally has to approve the updates and make them available to users, though ultimately it remains up to the user to upgrade when an update becomes available. The update challenge for Android in the enterprise is less about preventing an update and more about the uncertainty of when (or if) devices can be updated.

Chrome OS is essentially updated by Google across all of the devices running it. This is the most apt comparison to Microsoft’s plans for Windows 10. The big difference is that Chromebooks are little more than the Chrome browser and are designed primarily for working with data in cloud-based services. Although the devices do have local storage and support for some peripherals, they are extremely uniform compared to any other major platform (which makes them easier to manage than rivals).

This isn’t to say that IT professionals have always been happy about these platforms or their upgrade processes. iOS and Android were met with skepticism and even hostility by many IT departments. As the platforms have matured into true enterprise tools and it’s become clear they are a necessary part of the enterprise computing landscape, IT has had to adapt to the realities associated with supporting, securing, and managing them.

Part of that adaptation is to the way these platforms get updated. iOS is a great example of how IT departments already deal with being shut out of a platform's update process.

With iOS, IT gets very limited lead time on major updates (typically the three months between Apple's Worldwide Developers Conference in June and the public release in the fall). Many IT shops now realize that the next version of iOS will arrive for their organizations the day it's released. As such, it's common practice to download and test the developer preview builds through that period to ensure smooth operation on day one. Similarly, many IT departments keep up to date on the previews of minor iOS releases throughout the year.

Microsoft's update process is going to require a similar adjustment. If Microsoft won't back down from its position that regular cumulative updates are the future of Windows, IT will need to take the same approach to Windows that it uses with other platforms.

Windows is not iOS

One major difference between iOS and Windows 10 is that Microsoft still allows updates to be deferred by IT. This means that IT departments have greater lead time for testing and developing plans to address potential pitfalls. Even if IT shops rely solely on the CB release, there is expected to be up to eight months to prep before an update becomes mandatory for CBB PCs and devices. Windows Insiders will get an even longer lead time, since they will have access to updates before public release. In effect, Microsoft is striking a middle ground between Apple’s approach and the approach used in previous Windows versions.

That longer lead time, of course, isn’t a luxury. Windows deployments can be significantly more complicated than those for iOS or Android and almost universally there are more PCs than mobile devices in an organization. Still, using an iOS update strategy as a blueprint is a good starting point for figuring out how to approach Microsoft’s planned Windows 10 update process at work.

It’s also worth noting that IT departments do have some time to develop that strategy. Although Microsoft is clearly ushering anyone and everyone it can onto Windows 10, there’s little need for enterprises to make the switch from Windows 7 immediately — particularly for those that only recently made the jump from XP to 7. Delaying a transition or focusing only on a proof-of-concept or pilot project allows IT departments to get a handle on everything related to Windows 10 before rolling it out, including how to handle updates.

Ignoring Windows 10 isn’t an option

Although it's possible to delay a Windows 10 transition, perhaps even for years, enterprises are eventually going to have to bite the bullet.

Putting off the move is perfectly logical, particularly until the core capabilities to manage Windows 10 and its update process are established. That doesn’t mean, however, that this is a time to be complacent and ignore it completely. Sooner or later, virtually every organization will need to reckon with Windows 10 (or perhaps migrate to non-Windows platforms, which would pose an entirely different set of challenges).

Preparing for that reality, even while pushing back against Microsoft’s current plans, is critical to eventually making a smooth transition.

 


 

The evil that lurks inside mobile apps

Written by admin
October 31st, 2015


Enterprises are at risk from malware and vulnerabilities hiding within mobile apps. You have to test your mobile apps to preserve your security.

Mobile apps are ubiquitous now, and they offer a range of business benefits, but they also represent one of the most serious security risks ever to face the enterprise. The mixing of devices and software for work and leisure opens up many potential avenues for attack, but even purpose-built enterprise apps are shipping with woefully inadequate security protections.

Defects and vulnerabilities commonplace
Did you know that mobile apps typically ship with between one and ten bugs in them?

According to research by Evans Data, only five percent of developers claim to ship apps with zero defects, while 20% ship with between 11 and 50 bugs. Even when testing is conducted, it’s on a limited subset of devices and platform versions.

Many software developers simply don't have the resources to conduct proper testing before release, especially with the pressure to reach the market faster than everyone else. It's accepted that many defects will be discovered by customers and fixed later through updates; in fact, 80% of developers push out updates at least monthly.

The chance of security vulnerabilities slipping through is very high. But that's the average mobile app developer; surely the enterprise takes security more seriously, right?

You may assume that mobile app security testing is a lot more stringent in the business world, but it’s a dangerous assumption to make. Enterprise app developers are subject to the same pressures, and they’re just as likely to forgo security in the rush to market.


Lack of security testing in the enterprise
Many organizations are still taking it on trust that the mobile apps they use are secure. We’ve looked at the importance of assessing third-party vendors before. Almost 40% of large companies, even in the Fortune 500, don’t take the necessary precautions to secure the apps they build for customers, according to research by IBM and the Ponemon Institute.

In fact, one-third of companies never test their apps at all, and 50% of the companies surveyed admitted they devote absolutely no budget to mobile security.

Consider that more than half of businesses are planning to deploy 10 or more enterprise mobile apps in the next two years alone, according to 451 Research. The potential risk here is enormous. More data breaches are inevitable. What's worse is that many will go unnoticed for long periods of time. The impact on some businesses will be devastating, as security threats too often go ignored. To bury your head in the sand is to expose your business to potential catastrophe.

Build in security and educate
If you’re only thinking about security at the end of app development, then you’ve already left it too late. You need to build in secure features and adopt stringent testing from day one. That means consulting or hiring security experts during the design phase, and empowering them to influence developers. Focus on data encryption, user authentication, and regulatory requirements.

Monitoring and reporting should be built into your mobile apps. That way there's an audit trail to maintain security. Reports can also produce all sorts of useful analytics that help guide future development in the right direction. It's not just for security; it's also an important part of ensuring ROI for mobile apps.

It’s worth noting that mobile security at a platform level is improving, but few developers are taking full advantage of the new features designed specifically to secure apps for the enterprise. There has to be some education here. Without input from InfoSec talent, and the right training for developers, there’s no doubt that insecure mobile apps will continue to flood the market.

There’s no substitute for testing
At the end of the day, you will never know if your mobile apps are truly secure unless you test them. Proper mobile security penetration testing is essential. External testers with no vested interest and the right blend of expertise are best placed to provide the insight you need to uncover dangerous vulnerabilities and help you mitigate them.

If development continues after release, as your mobile apps are updated with new features and defect fixes, make sure that you consider the security implications and test each new release properly – it’s the only way you can really be sure that your mobile apps are secure.


7 ways to ease stress at work

Written by admin
October 17th, 2015

Workplace stress is a fact of life, especially in the IT industry. Keeping that stress to manageable levels can seem like a full-time job in and of itself. Thankfully, there are some easy ways to relax, recharge and rejuvenate, and many of them you can do right at your desk.

Technology
Ironically, the same technology that’s causing you undue stress and frustration can also be used to help manage and reduce it. From biometrics and fitness trackers that monitor your heart rate, blood pressure and the number of steps you take each day, to resilience solutions that guide you to stress-reduction resources (like those from Concern and Limeade), technology is playing a huge role in helping workers chill out and relax.

Meditation
Meditation can be done anywhere, anytime, whether you’ve got five minutes or 50. Take some deep breaths and clear your mind in between meetings, or before a particularly stressful phone call. Meditate on the bus or the subway. You can try mantra meditation, where you silently repeat a word or phrase; mindfulness meditation, which focuses on the flow of your breath and on being conscious of the present moment, or some form of meditative movement like Qigong or yoga.

Exercise breaks
Does your workplace have an on-site fitness center? Use it. Is one of your employee benefits or perks a fitness center membership or reimbursement? Take advantage of that. Even a brisk walk around the block, or jogging up and down the stairs instead of taking an elevator can help get the blood flowing and help relax your mind and energize your body.

Tech time-out
It’s hard to manage stress when you’re constantly reading emails, your smartphone’s ringing off the hook, text messages keep flooding in and your to-do list keeps getting longer. Set aside a certain period of time each day for a tech time-out, says Henry Albrecht, CEO of employee wellness solutions company Limeade. Turn off all your electronic devices and focus on something other than a screen. You could even meditate during this time. You’ll be surprised how peaceful it can be.

Curb caffeine
No one’s suggesting you give up your morning cup of Joe, but cutting down on caffeine intake, or setting a time of day when you stop drinking caffeinated beverages, can help you better manage stress. “Maybe after, say, 2 p.m., avoid anything with caffeine in it. That can affect your sleep later on in the evening, and if you aren’t well-rested, that will add to your stress,” says Albrecht.

Sound sleep
Make sure you’re getting your rest, or you’ll be poorly equipped to manage stress. The general rule is eight hours, but some people function optimally on a little more or less. Figure out what works for you and stick to it. And don’t fall asleep in front of the TV, your tablet or your smartphone, either. Research shows that can affect the quality of your REM sleep and impact your rest.

Fix your finances
Financial issues can affect more than just your credit score – taking care of your financial health is critical to maintaining your overall physical and emotional health, too. If you’re struggling financially, check with your HR department to see if they have financial wellness and planning resources available. Or consult a financial advisor or debt consolidation organization. You should also check out free budgeting technology, like Mint, that can help track your spending.


How to use stipends to ensure BYOD success

Written by admin
October 3rd, 2015

There are real differences between stipend options, and the success of your program will depend on getting them right

Stipends are a way for businesses to reimburse employees for a portion of their wireless costs and, if implemented properly, address these common issues: cost, eligibility, control and taxes. Here’s how:

* Costs. When businesses talk about costs, they generally are referring to either time or money. And companies opting to use expense reports for stipends will find the task occupies a good bit of both. It’s time-consuming for accounting departments to sort through individual expense reports and issue payments only after an employee’s usage has been verified. It’s no surprise, then, that an Aberdeen Group study suggests each expense report costs $18 to process. Compounding those costs, companies opting for this method will issue hundreds or even thousands of payments each month, so the benefits that attend stipends can be quickly outweighed.

More recently, a few carriers have started to offer a split-billing solution. Split billing attempts to categorize employee usage as either personal or work-related and, in turn, solves some of the issues that expense reports present. For starters, companies could avoid processing individual expense reports, as employees’ bills would make them unnecessary. Unfortunately, though, these split-billing solutions are only partial solutions, as they typically do not account for the voice portion of an employee’s bill. An even larger concern, however, is that split-billing forces employees to align with one carrier, a concept that is at odds with the heart of BYOD: autonomy.

A less discussed but potentially more complete stipend solution is referred to as direct-to-carrier credits. In fact, Gartner has called this process the most effective method for managing BYOD expenses. Simply put, companies determine payment levels based on employee role or any other relevant factor, and then have the stipends applied directly to employees’ bills as a credit.

This solution is typically tied into software that encourages employees to comply with mobile policies and alerts the employer and BYOD solution provider when a device is out of compliance. Plus, by integrating with HR Information Systems, the solution alerts the vendor when an employee’s role or status has changed within the organization.

* Determining Eligibility. Regardless of the stipend approach used, companies must determine which employees are eligible to participate, and many base the decision on roles. For example, an organization may decide to exclude hourly employees from its stipend program. That doesn’t necessarily mean those employees can’t access the network; it simply means they bear the entire costs themselves. If utilizing direct-to-carrier credits, companies may place eligible employees into one of three or more categories. An employee who rarely needs to be contacted outside the office might receive a $35 stipend each month. A salesperson, on the other hand, might receive twice that amount due to the demands of the position. In any event, employees would be assigned a tier by managers and then enroll in the BYOD program over a web portal.
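
That tiering reduces to a simple lookup once roles are assigned. The short Python sketch below is hypothetical; the role names are invented, and the dollar amounts just echo the $35 and double-that examples above.

    # Hypothetical stipend tiers keyed by role; the amounts echo the example
    # above ($35 base, double for sales) and are not recommendations.
    STIPEND_TIERS = {
        "back_office": 35,   # rarely contacted outside the office
        "manager":     50,
        "sales":       70,   # on-call demands justify twice the base tier
    }

    def monthly_stipend(role: str, hourly: bool = False) -> int:
        if hourly:
            return 0         # e.g., hourly employees excluded from the program
        return STIPEND_TIERS.get(role, 0)

    print(monthly_stipend("sales"))              # 70
    print(monthly_stipend("back_office", True))  # 0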

* Taking Control. The decision to reimburse employees for BYOD, at least in California, became clearer with the Cochran ruling. In other states, it may simply come down to control. That is, control over the devices accessing corporate information. For example, if MDM software is required to be downloaded prior to accessing the network, businesses can ensure their employees don’t download certain apps or visit certain sites that may jeopardize security.

Stipends offer a compelling incentive for end users. Employees get help paying their mobile bill (for work-related purposes, of course) and employers get some measure of control over the device itself due to the fact that stipends can be tied into the MDM software in such a way that if a device falls out of compliance the stipends are immediately suspended. Those safeguards are absent from reimbursements made via expense reports. And though stipends may be contingent upon compliance, if those stipends aren’t synced with the MDM software, it does little to prevent a breach or respond quickly to a noncompliant device.

* Limiting Taxes. The Internal Revenue Service (IRS), in Notice 2011-72, thankfully removed mobile devices from the “heightened substantiation requirements” they were subject to prior to 2010. To avoid tax consequences, the devices have to be provided for substantial noncompensatory business reasons, such as an employee’s need to communicate with clients after normal work hours or the employer’s need to reach the employee during similar off hours.

Shortly thereafter, the IRS issued Interim Guidance on Reimbursement of Employee Personal Cell Phone Usage in light of Notice 2011-72, wherein it addressed reimbursements made to employees for the business use of employee-owned devices. In order for a stipend to avoid taxation as additional wages or income, the memorandum states that, where employers, for the same substantial noncompensatory business reasons noted in Notice 2011-72, require employees to use their personal cell phones, the employee must “maintain the type of cell phone coverage that is reasonably related to the needs of the employer’s business, and the reimbursement must be reasonably calculated so as not to exceed expenses the employee actually incurred in maintaining the cell phone.”

A tiered approach to stipends that considers the differing needs and demands of various roles within an organization would seem to satisfy those requirements. Though not without shortcomings, split billing solutions clearly satisfy the requirements by separating usage on each bill.

While there is much that is unclear regarding the tax code, the fact that BYOD is growing in popularity every year is undisputed. And as more Millennials enter the workforce, that trend will likely not slow.

BYOD is about more than the wishes of tech-savvy employees; it’s about productivity and the bottom line. To maximize both, companies should strongly consider offering employees a stipend for the work-related use of their personal devices.

While options for paying stipends exist, organizations need to understand there are real differences between those options and, often, the success of a BYOD program depends on how those stipends are offered.

 


F. Scott Fitzgerald didn’t know everything
The tech industry in 2015 is shaped by one executive’s spectacular second act: Steve Jobs, exiled from the company he helped build, returned triumphantly in 1996 to take back control and transform it into a world-changing electronics company. It’s a story that everyone knows, but it’s one that’s almost unique in the tech industry. More common is a different kind of second act: one in which a leader or visionary leaves (voluntarily or not) the role that made them famous and tries something else, something new. Sometimes these new gigs are calmer and more low-key than their first act; sometimes they might seem to be in a very different field; and sometimes they take a tech leader to new heights.

Elon Musk
In 2001, Elon Musk was deposed as CEO of PayPal, a company he helped found and focus on online payments. The coup was motivated, depending on who you ask, by either his autocratic management style or his attempt to move PayPal’s infrastructure from Unix to Windows. Most people would’ve been satisfied with having created a service that redefined how people pay for things, and also with a $165 million payout. Instead, Musk went for a double second act, pouring his fortune into Tesla, which aims to transform how cars are powered, and SpaceX, which seeks to make manned spaceflight profitable. It’s pretty difficult to imagine two more grandiose goals to tackle.

Ev Williams
Pyra Labs, co-founded by Ev Williams in 1999, was supposed to make (boring) project management software. But they built a publishing tool for internal use that they called Blogger, which quickly became an outward-facing service, which quickly brought blogging mainstream and got Pyra Labs acquired by Google in 2003.

Flash-forward a few years: Williams leaves Google and helps found Obvious Corp., a sort of incubator with several projects in progress; one of them, launched in 2006, was originally called twttr, and was conceived of as an SMS-based publishing network. Nearly a decade later, Twitter has come to define Web publishing for the ’10s as much as blogging did for the ’00s. Will Williams’s next startup focus on even shorter posts?

Jack Dorsey
While Williams was an important part of Twitter’s origin story, it was Jack Dorsey who laid the foundations for its technology, after having ruminated on similar ideas for much of the first half of the ’00s. Dorsey was Twitter’s CEO in its early years. However, the microblogging service was barely out of its infancy when he launched another endeavor: Square, a service that made it easy to accept credit card payments on smartphones. The company had reached beta status by 2010. Twitter is a media darling and may get more press, but more people probably encounter Square, which aggressively moved to replace standard cash registers with iPads, in real life. In a Jobsian move, Dorsey has also returned to Twitter as CEO, though that seems temporary.

Andy Rubin
Maybe Rubin didn’t have so much a second act as a second try. He was one of the co-founders of Danger, Inc., a company whose Danger Hiptop phone-PDA combo — a smartphone, essentially — was way, way ahead of its time when it arrived on the market in 2002. Rubin left the company, which ended up stagnating before being absorbed by Microsoft, but he wasn’t done with mobile. He quietly started another company, Android, which focused on mobile software, and which was, just as quietly, bought by Google in 2005. Android was the world-changer that proved that sometimes the second time’s the charm.

Carly Fiorina
In tech circles, Carly Fiorina is best remembered for her late ’90s/early ’00s stint as CEO of Hewlett-Packard, which was extremely controversial within the industry; she fought the company’s founding families, dismantled the egalitarian “HP Way,” and, most famously, engineered a much-derided merger with Compaq. Fiorina was fired in 2005, but has chosen a second act even more grandiose than conquering space: politics. Undaunted by a failed 2010 Senate run that featured one of the weirdest campaign ads in living memory, Fiorina is currently running for the 2016 Republican presidential nomination, and in her first big debate managed to humble Donald Trump.

Henry Blodget
Perhaps nobody on this list had their first act end as dramatically as Henry Blodget: as a stock analyst for Merrill Lynch during the dot-com boom, he promoted stocks in public that he privately admitted weren’t worth much; he eventually paid a $2 million civil fine and was banned from the securities industry. For his second act, he turned to journalism: he helped found Silicon Alley Insider in 2007, which quickly became part of the Business Insider empire, where Blodget is the editor in chief and CEO. Much of the industry’s hostility toward him has dissipated, and many view him as a sort of kooky uncle, especially when he produces oddball it-happened-to-me articles like this one.

Kevin Rose
Kevin Rose is perhaps emblematic of the sort of second acts many tech execs who hit it big young have: the anticlimactic kind. Rose founded Digg, which for a few years in the ’00s was one of the most important websites on the Internet, with hundreds of millions of views and the power to make or break stories that it linked to. A baby-faced Rose appeared on the cover of BusinessWeek in 2006, though he later claimed the hat and headphones weren’t his. After a disastrous 2010 redesign evaporated Digg’s goodwill, Rose started an app-making shop that got bought by Google and ended up briefly working on Google+, a project that, as we all know, did not end in glory.

James Gosling
Some second acts are lower-key by choice. James Gosling created Java for Sun Microsystems in 1995; when Sun was merged into Oracle in 2010, Gosling left in short order, which was seen as emblematic of the culture clash between the two companies. After a brief five-month stint working for Google, Gosling went in a completely different direction: he took a job with Liquid Robotics, helping build low-power automatic seafaring robots. I imagine this job has to be significantly less stressful than his previous high-profile gigs.

Steve Jobs (again)
Steve Jobs’s return to Apple is so important to the industry that it’s easy to forget that he did have another, truly different second act. In 1986, after he had been ousted from Apple, Jobs spent $5 million to fund the spinoff of LucasFilm’s Graphics Group, which was quickly renamed Pixar. After years of failed attempts to market to special effects artists the custom hardware and software the group had developed, and only a little traction from doing commercial animation, Jobs was almost prepared to sell the company in 1995, when Toy Story was released to near-universal acclaim and massive box office success. The rest was history. Even Jobs’s secondary second act was pretty good.


 

 


Attackers have hijacked thousands of websites running the WordPress content management system and are using them to infect unsuspecting visitors with potent malware exploits, researchers said Thursday.

The campaign began 15 days ago, but over the past 48 hours the number of compromised sites has spiked, from about 1,000 per day on Tuesday to close to 6,000 on Thursday, Daniel Cid, CTO of security firm Sucuri, said in a blog post. The hijacked sites are being used to redirect visitors to a server hosting attack code made available through the Nuclear exploit kit, which is sold on the black market. The server tries a variety of different exploits depending on the operating system and available apps used by the visitor.

“If you think about it, the compromised websites are just means for the criminals to get access to as many endpoint desktops as they can,” Cid wrote. “What’s the easiest way to reach out to endpoints? Websites, of course.”

On Thursday, Sucuri detected thousands of compromised sites, 95 percent of which are running on WordPress. Company researchers have not yet determined how the sites are being hacked, but they suspect it involves vulnerabilities in WordPress plugins. Already, 17 percent of the hacked sites have been blacklisted by a Google service that warns users before they visit booby-trapped properties. Interestingly, Cid added, the attackers have managed to compromise security provider Coverity and are using it as part of the malicious redirection mechanism. The image above shows the sequence of events as viewed from the network level using a debugging tool.

Sucuri has dubbed the campaign “VisitorTracker” because one of the function names used in a malicious JavaScript file is visitorTracker_isMob(). Cid didn’t identify any of the compromised sites. Administrators can run Sucuri’s website scanner against their site to check whether it is affected by this ongoing campaign.
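As a quick supplementary check, an administrator could also search a site’s document root for the function name Sucuri reported. Below is a minimal sketch in Python; the WordPress root path and the file extensions scanned are assumptions to adjust for your own server, and a clean result does not prove the site is safe.

import os

MARKER = "visitorTracker_isMob"          # function name reported by Sucuri
WP_ROOT = "/var/www/html"                # hypothetical WordPress root; adjust
SUSPECT_EXTS = (".js", ".php", ".html")  # file types worth checking

def scan(root):
    """Walk the install and flag files containing the reported marker."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(SUSPECT_EXTS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    if MARKER in f.read():
                        hits.append(path)
            except OSError:
                pass  # skip unreadable files rather than abort the scan
    return hits

if __name__ == "__main__":
    for path in scan(WP_ROOT):
        print("possible infection:", path)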

 


 

Hires talent, acquires assets, teams with other providers to reach beyond its core footprint

Comcast Business, which began offering communications services to small businesses in its regional footprint in 2007 and broadened its portfolio in 2010 to appeal to larger organizations in that local realm, today announced an Enterprise Services unit that will go after Fortune 1000 companies regardless of geography.

“We have the right services in our portfolio now, the right performance levels, the right metrics, so we can target businesses outside of our footprint,” says Bill Stemper, president of Comcast Business, which has an annual run rate of $4.5 billion.

The company, which has been spending $1 billion per year to expand its business network, will offer Ethernet, Internet access, advanced voice services, and a range of managed services, including everything from managed router and security services to 3G and 4G backup services, Stemper says.

Comcast Business serves 39 states and 20 of the top 25 markets, representing roughly 45% of the US. To go after “bigger companies, even those not in our footprint, we did three things,” Stemper says:

* Hired a leader for the charge: Glenn Katz, formerly CEO of SpaceNet, a service aggregator that supported business customers by pulling together offerings from different providers.

* Worked with fellow CATV-based service providers to hash out a “cable first solution,” whereby the companies have agreed to buy and sell from one another much like telephone companies cobble together services from different providers today to deliver end-to-end enterprise solutions. Comcast Business says it has reached network agreements with Brighthouse, Cablevision, Charter, Cox, Mediacom, Suddenlink and Time Warner Cable.

* Acquired Contingent Network Services for its expertise in offering managed services to many nationally known businesses. “The company will become a wholly-owned subsidiary of Comcast Business and will continue to operate under the Contingent brand name,” Comcast Business reports.

Asked what will get Comcast Business in the door, Stemper says scalable bandwidth at great price points and the speed at which they can react to customer needs. The customer sweet spot will be banking and finance firms and hospitality and food service organizations that have some centralized offices and data centers and maybe 1,000 scattered branches/outlets, he says.

In terms of what comes next, Stemper sees software-defined networking playing a big role. “The new world is Ethernet based, and the more sophisticated businesses want to prioritize apps, customize the manner in which the network works with the apps. So all of us are working on software defined capabilities that give them that capability, but in a way that is more flexible than traditional MPLS. The new world is going to be more dynamic and customizable, and software defined capabilities will be one of the next things we layer in.”

 


 

Get ready to live in a trillion-device world

Written by admin
September 14th, 2015

A swarm of sensors will let us control our environment with words or even thoughts

In just 10 years, we may live in a world where there are sensors in the walls of our houses, in our clothes and even in our brains.

Forget thinking about the Internet of Things where your coffee maker and refrigerator are connected. By 2025, we could very well live in a trillion-device world.

That’s the prediction from Alberto Sangiovanni-Vincentelli, a professor of electrical engineering and computer science at the University of California at Berkeley.

“Smartness can be embedded everywhere,” said Sangiovanni-Vincentelli. “The entire environment is going to be full of sensors of all kinds. Chemical sensors, cameras and microphones of all types and shapes. Sensors will check the quality of the air and temperatures. Microphones around your environment will listen to you giving commands.”

This is going to be a world where connected devices and sensors are all around us — even inside us, Sangiovanni-Vincentelli said in an interview with Computerworld during DARPA’s Wait, what? Forum on future technology in St. Louis this week.

“It’s actually exciting,” he said. “In the next 10 years, it’s going to be tremendous.”

According to the Berkeley professor and researcher, we won’t have just smartphones.

We’ll have a swarm of sensors that are intelligent and interconnected.

Most everything in our environment — from clothing to furniture and our very homes — could be smart. Sensors could be mixed with paint and spread onto our walls.

We’ll just speak out loud, and information will be delivered instantly without an online search, phone calls will be placed, or a robot will start cleaning or making dinner.

And with sensors implanted in our brains, we wouldn’t even need to speak out loud to interact with our smart environment.

Want something? Just think about it.

“The brain-machine interface will have sensors placed in our brains, collecting information about what we think and transmitting it to this complex world that is surrounding us,” said Sangiovanni-Vincentelli. “I think I’d like to have an espresso and then here comes a nice little robot with a steaming espresso because I thought about it.”

Pam Melroy, deputy director of DARPA’s Tactical Technology Office, said the Berkeley professor isn’t just dreaming.

“I do think there’s something to that” scenario, said Melroy, who is a retired U.S. Air Force officer and former NASA astronaut. “At the very least, we should be preparing for it and thinking of what is needed. We get into very bad places when technology outstrips our planning and thinking. I’d rather worry about that and prepare for it even if it takes 20 years to come true, than just letting it evolve in a messy way.”

While having a trillion-device life could happen in as little as 10 years, Sangiovanni-Vincentelli said there’s a lot of work to be done to get there.

First, we simply don’t have the network we’d need to support this many connected devices.

We would need communication protocols that consume very small amounts of energy and can transmit fluctuating amounts of information, the professor explained. Businesses would need to build massive numbers of tiny, inexpensive sensors. We’ll need more and better security to fend off hacks to our clothing, walls and brains.

And the cloud will have to be built out to handle all of the data these trillion devices will create.

“Once you have the technology enabling all of this, we should be there in 10 years,” said Sangiovanni-Vincentelli.

With all of these devices, many people will be anxious about what this means for personal privacy.

Sangiovanni-Vincentelli won’t be one of them, though.

“Lack of privacy is not an issue,” he said. “We’ve already lost it all… If the government wants me now, they have me. Everything is already recorded somewhere. What else is there to lose?”

Melroy also is more excited than nervous about this increasingly digital future.

“As a technologist, I don’t fear technology,” she said. “I think having ways that make us healthier and more efficient are a good thing… There is social evolution that happens with technological evolution. We once were worried about the camera and the privacy implications of taking pictures of people. The challenge is to make the pace of change match the social evolution.”


 


 

10 more security startups to watch

Written by admin
September 10th, 2015

Startups focus on encryption, endpoint protection, event analysis and radio-frequency scanning

The emergence of cybersecurity startups has continued unabated as entrepreneurs vie for corporate customers seeking new technologies to battle ever more numerous and innovative attackers.

The expertise of these new companies ranges from improvements to encryption products, to analysis of the wealth of security-incident data gathered from networks, to gear that detects potentially malicious wireless activity by Internet of Things devices.

Given venture capital investors’ continued interest, these companies will keep proliferating. Here are 10 more security startups we are watching, and why.

Barkly
Headquarters: Boston
Founded: 2013
Funding: $17 million including seed and Series A financing
Leaders: CEO Mike Duffy and CTO Jack Danahy, who worked together at BBN and at IBM
Fun fact: The company name is supposed to remind you of a guard dog barking to alert you to intruders.

Why we’re following it: In the hot endpoint security space, Barkly promises a lightweight agent to gather data – lightweight in its footprint and in its CPU usage. That makes it less intrusive to end users. Given that its founders promise general availability of its first product by the end of the year and that the company has enough funding for two years, Barkly could be a player. Plus its founders have driven other successful startups, notably OpenPages and Ounce Labs, both bought by IBM.

Bastille
Headquarters: Atlanta
Founded: 2014
Funding: $9 million from Bessemer Venture Partners
Leader: Founder and CEO Chris Rouland who also founded End Game
Fun fact: The initial idea for the company stemmed from a system Rouland devised to make pickup time more efficient at schools by mapping children to the unique radio-frequency signatures of their parents’ cars.

Why we’re following it: With the proliferation of wirelessly connected Internet of Things devices inside enterprises, security professionals lack technology to adequately monitor what those devices are up to, or even to detect that they are inside the network. Bastille’s monitoring of corporate airspace for such devices, and its analysis revealing when they are acting maliciously, is a means to gain that important intelligence and pass it along to existing security tools.


Bitglass
Headquarters: San Jose
Founded: 2013
Funding: $35 million in two rounds from Norwest Venture Partners, NEA and SingTel Innov8
Leaders: founders Nat Kausik (CEO, previously chief of four startups, one bought by Cisco and one by CA) and Anurag Kahol (CTO, a Juniper alum)
Fun fact: The company ran an experiment to track what happens to stolen credit card data and found that once posted on the Dark Net it was opened more than 1,000 times in 12 days.

Why we’re following it: The company’s patented technology makes it safe to store corporate data in the cloud without degrading the speed at which the data can be searched, a common problem with searching encrypted files. Rather than store the data encrypted in the cloud, it stores an encrypted handle representing the data. When the data needs to be retrieved, the handle is downloaded and the full file is pulled from a database stored securely within the corporate network. This allows a high level of encryption (AES 256) as well as speedy search.
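To make the indirection concrete, here is a toy sketch of the handle idea, assuming nothing about Bitglass’s actual implementation: the cloud stores only an opaque handle, and the real file never leaves a store inside the corporate network (which, in the real product, would itself hold AES 256-encrypted data).

import secrets

local_store = {}   # stands in for the database inside the corporate network
cloud_store = {}   # stands in for the cloud provider

def upload(name, data):
    """Park the file locally; hand the cloud only an opaque handle."""
    handle = secrets.token_hex(16)   # reveals nothing about the contents
    local_store[handle] = data       # full file never leaves the network
    cloud_store[name] = handle
    return handle

def download(name):
    """Resolve the cloud-held handle against the local store."""
    handle = cloud_store[name]
    return local_store[handle]

upload("q3-forecast.xlsx", b"sensitive contents")
assert download("q3-forecast.xlsx") == b"sensitive contents"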

FinalCode
Headquarters: San Jose
Founded: 2014
Funding: backed by Japanese parent company Digital Arts
Leaders: CEO Gord Boyce, COO Scott Gordon (both formerly with ForeScout)
Fun fact: The company is a spin-out from Japanese email and Web-filtering company Digital Arts, which wanted to focus on selling the platform in the U.S. market.

Why we’re following it: FinalCode takes the pain out of the complex key management needed to encrypt documents and have decryption rights follow them wherever they go. It allows flexibility in where these permissions are stored, either in its cloud or behind customers’ firewalls. The platform makes document-sharing services such as Box and Dropbox secure enough to handle corporate information, without requiring any changes to the services themselves.

Ionic Security
Headquarters: Atlanta
Founded: 2011
Funding: $78.1 million from Kleiner Perkins Caufield & Byers, Meritech Capital Partners and Google Ventures
Leaders: CEO Steve Abbott, with roots in Symantec, PGP and Network Associates, and CTO Adam Ghetti, named a 2015 Technology Pioneer by the World Economic Forum
Fun fact: The company used to be called Social Fortress.

Why we’re following it: Ionic’s service encrypts documents using symmetric-key encryption, then manages the key, taking a huge burden off its customers. In addition to securing entire documents from anyone but authorized users, it can secure parts of a document so that one group of recipients can see all of it, but others can decrypt only a designated portion. It also monitors who is actually opening up documents.
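The portion-level idea can be illustrated in a few lines of Python using the third-party cryptography package; this is an invented example, not Ionic’s design. Each section gets its own symmetric key, and a recipient group receives keys only for the sections it may read.

from cryptography.fernet import Fernet

sections = {
    "summary": b"High-level summary everyone may read.",
    "financials": b"Numbers only the finance group may read.",
}

# Encrypt every section under its own key.
keys = {name: Fernet.generate_key() for name in sections}
ciphertexts = {name: Fernet(keys[name]).encrypt(body)
               for name, body in sections.items()}

# Key distribution determines who can decrypt what.
group_keys = {
    "all-staff": {"summary": keys["summary"]},  # summary only
    "finance": dict(keys),                      # everything
}

def read(group, section):
    key = group_keys[group][section]  # KeyError means "not authorized"
    return Fernet(key).decrypt(ciphertexts[section])

print(read("all-staff", "summary"))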

Menlo Security
Headquarters: Menlo Park
Founded: 2013
Funding: $35.5 million through Series B funding from Sutter Hill Ventures, General Catalyst, Osage University Partners and Engineering Capital
Leaders: CEO Amir Ben-Efraim and Chief Product Officer Poornima DeBolle, both formerly with Juniper Networks
Fun fact: The founders are commercializing technology licensed from University of California at Berkeley research.

Why we’re following it: Menlo Security offers a simple service that looks to be effective at stripping malware from email and Web traffic. It does this by proxying all such traffic to the company’s cloud where any code is executed in a container. Only a rendering of the content reaches the user’s browser, so it is free of any potential malware. For upstream traffic, the code in the container proxies back to the servers.
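The effect is easiest to see in a drastically simplified sketch: fetch the page away from the user, drop anything executable and return only inert markup. A real isolation service renders the page in a container and streams a drawing of it, so the code below, written with Python’s standard library, is only an analogy.

from html.parser import HTMLParser
from urllib.request import urlopen

DROP = {"script", "iframe", "object", "embed"}  # active content to strip

class Sanitizer(HTMLParser):
    """Re-emit markup with executable elements (and all attributes) removed."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0
    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            self.skip_depth += 1
        elif self.skip_depth == 0:
            self.out.append(f"<{tag}>")
    def handle_endtag(self, tag):
        if tag in DROP:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif self.skip_depth == 0:
            self.out.append(f"</{tag}>")
    def handle_data(self, data):
        if self.skip_depth == 0:
            self.out.append(data)

def fetch_inert(url):
    """Fetch a page server-side and return a script-free rendering of it."""
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    s = Sanitizer()
    s.feed(html)
    return "".join(s.out)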

Niara
Headquarters: Sunnyvale
Founded: 2013
Funding: $29.4 million from Venrock, New Enterprise Associates (NEA) and Index Ventures through two rounds.
Leader: CEO Sriram Ramachandran with executive experience at Aruba, Juniper, Netscreen and Neoteris
Fun fact: The name Niara means haystack in Spanish, and has no particular significance relating to what the company does.

Why we’re following it: The company makes a security-event analyzer that correlates events that could be signs of attack, assigns them severity scores and issues alerts. The upside for customers is that the analyzer takes input about events from existing security platforms, enhancing their usefulness. The goal is to provide much-needed screening and prioritizing of events for human security analysts to check out, rather than having them comb through everything manually – an overwhelming task. This platform could help businesses get more from the security information they already gather without having to drastically expand hard-to-find security staff.
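At its core, the correlate-score-alert loop the company describes looks something like the following sketch. The event types, weights and threshold are invented for illustration; a real analyzer ingests feeds from existing security platforms and uses far richer models.

from collections import defaultdict

WEIGHTS = {"failed_login": 1, "port_scan": 3, "malware_signature": 10}
ALERT_THRESHOLD = 10

def score_events(events):
    """Correlate events by host and sum severity weights per host."""
    scores = defaultdict(int)
    for event in events:
        scores[event["host"]] += WEIGHTS.get(event["type"], 0)
    return scores

events = [
    {"host": "10.0.0.7", "type": "failed_login"},
    {"host": "10.0.0.7", "type": "port_scan"},
    {"host": "10.0.0.7", "type": "malware_signature"},
    {"host": "10.0.0.9", "type": "failed_login"},
]

for host, score in score_events(events).items():
    if score >= ALERT_THRESHOLD:
        print(f"ALERT: {host} scored {score}; escalate to an analyst")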

Red Canary
Headquarters: Denver
Founded: 2014
Funding: $2.5 million in seed funding led by Kyrus-Tech
Leaders: CEO Brian Beyer, head of detection operations Keith McCammon, research and development chief Jason Garman and engineer Chris Rothe, all of whom worked together at Kyrus and, except for Rothe, at ManTech
Fun fact: The company name invokes the proverbial canary in the coal mine that warns miners of poisonous gases.

Why we’re following it: The company offers a service necessary to many businesses – human analysts who sort through security alerts to eliminate false positives before alerting customers to the danger. The cost and scarcity of qualified security analysts put in-house staffing beyond the budgets of businesses of all sizes. Red Canary focuses on analyzing security-event data and delegates gathering that data to other vendors – Bit9+CarbonBlack for endpoint sensors, and threat intelligence from Threat Recon, Farsight Security and Bit9+CarbonBlack’s Threat Intelligence Cloud, in addition to its own.

Soha Systems
Headquarters: Sunnyvale
Founded: 2013
Funding: $9.76 million in venture funding from Menlo Ventures, Andreessen Horowitz, Cervin Ventures and Moment Ventures
Leaders: CEO Haseeb Budhani (Infineta and NET) , Vice President of Engineering Hanumantha Kavuluru (MobileIron, Nortel), and Vice President of Marketing Rob Quiros (Cisco, Riverbed)
Fun fact: Soha is the Arabic name for a star that Arabs once used to test their eyesight.

Why we’re following it: Soha provides cloud-based security services that reduce the time, cost and expertise required compared to buying and deploying infrastructure to accomplish the same goals. The service includes authentication, authorization, application firewalling, WAN optimization and server load balancing across multiple application instances. A dashboard shows customers how accessible their applications are.

Vera
Headquarters: Palo Alto
Founded: 2014
Funding: $14 million from Battery Ventures
Leader: CEO Ajay Arora, who has worked at startups acquired by Cisco, Intel and IBM
Fun fact: Arora says if the company were a superhero it would be Violet Parr from The Incredibles because she can generate an invisible shield.

Why we’re following it: Vera software imposes encryption on documents that follows them around until a legitimate recipient authenticates to release the decryption keys. That has obvious security benefits, but it is done with minimal change to how users interact with the application whose files are being encrypted. It can be used on any device and in conjunction with other security tools. All this means the product not only secures information, it is unobtrusive enough to clear the usual obstacles to adoption.
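The pattern of encryption that travels with the file can be sketched in a few lines, again using the cryptography package; the key-service API below is invented for the example and is not Vera’s. The ciphertext can be synced or emailed anywhere, but opening it requires the key service to release the key after checking the recipient.

from cryptography.fernet import Fernet

class KeyService:
    """Stands in for a vendor-hosted key server with an access list."""
    def __init__(self):
        self._keys = {}
        self._acl = {}
    def register(self, doc_id, key, allowed_users):
        self._keys[doc_id] = key
        self._acl[doc_id] = set(allowed_users)
    def release(self, doc_id, user):
        if user not in self._acl[doc_id]:
            raise PermissionError(f"{user} may not open {doc_id}")
        return self._keys[doc_id]

service = KeyService()
key = Fernet.generate_key()
blob = Fernet(key).encrypt(b"contract draft")  # this blob can travel anywhere
service.register("doc-42", key, allowed_users=["alice"])

# Only an authenticated, authorized recipient gets the key back.
print(Fernet(service.release("doc-42", "alice")).decrypt(blob))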


 
