Archive for the ‘Tech’ Category


The 15 biggest enterprise ‘unicorns’

Written by admin
August 31st, 2015

The Wall Street Journal found 115 companies valued at more than $1 billion; these are the 15 biggest in enterprise tech

Not long ago there were only a few unicorns in the world of startups.

This week, though, The Wall Street Journal and Dow Jones VentureSource identified 115 companies with valuations north of $1 billion, which are referred to as unicorns.

Below are 15 of the highest valued enterprise software companies that have received venture funding but have not yet been sold or gone public.

Palantir
Valuation: $20 billion
Funding: $1.5 billion

What it does: Palantir has created a program that’s really good at finding relationships across vast amounts of data, otherwise known as link analysis software. Its meteoric rise has been fueled by big-money contracts with federal government agencies. Palantir is the second-largest unicorn, behind Uber, that The Wall Street Journal identified.

Dropbox
Valuation: $10 billion
Funding: $607 million

What it does: One of the pioneers of the cloud market, Dropbox’s file sync-and-share system has been a hit with consumers, and increasingly with businesses too. Chief competitor Box would have been a unicorn, but the company went public this year.

Zenefits
Valuation: $4.5 billion
Total funding: $596 million

What it does: Zenefits provides a cloud-based human resource management (HRM) system for small and midsized businesses, with an emphasis on helping businesses manage health insurance administration and costs.

Cloudera
Valuation: $4.1 billion
Total funding: $670 million

What it does: Cloudera provides a distribution of Hadoop. Its chief competitor in the big data/Hadoop market, Hortonworks, filed for an initial public offering earlier this year after being a unicorn itself.

Pure Storage
Valuation: $3 billion
Funding: $530 million

What it does: Pure Storage is one of the most popular startups in the solid-state, flash-storage market. It pitches its combined hardware-software product as a more affordable competitor to storage giant EMC.

DocuSign
Valuation: $3 billion
Funding: $515 million

What it does: DocuSign lets users electronically sign and file paperwork.

Slack
Valuation: $2.8 billion
Funding: $315 million

What it does: Slack is an enterprise communication and collaboration platform that lets users text and video chat and share documents.

Nutanix
Valuation: $2 billion
Funding: $312 million

What it does: Nutanix is one of the startups in the hyperconverged infrastructure market, providing customers an all-in-one system that includes virtualized compute, network and storage hardware, controlled by custom software. Converged systems are seen as the building blocks of distributed systems because of their ability to optimize performance, particularly on the storage side.

Domo
Valuation: $2 billion
Funding: $459 million

What it does: Founded by Josh James (who sold his previous startup, Omniture, to Adobe for $1.8 billion), this Utah-based company provides cloud-hosted business intelligence software tailored for business executives. The idea is to give C-level executives ready access to the important data they need to run their companies, in a user-friendly format accessible on any device.

GitHub
Valuation: $2 billion
Funding: $350 million

What it does: GitHub is a platform for hosting the source code repositories behind software projects, including most major open source projects. These repositories can be public or private and allow users to track bugs, usage and downloads. If you use an open source project, it’s likely hosted on GitHub.

Tanium
Valuation: $1.8 billion
Funding: $142 million

What it does: Tanium is a platform for identifying and remedying application outages or security threats in real time. One of its biggest differentiating features is an intuitive search bar that lets users query, in natural language, the status of the systems they’re monitoring for a variety of issues.

MongoDB
Valuation: $1.6 billion
Funding: $311 million

What it does: MongoDB is one of the most popular NoSQL databases. This newer breed of database is well suited to managing unstructured data, such as social media streams, documents and other complex data that doesn’t fit well into traditional relational databases.
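To make the document model concrete, here is a minimal sketch using the pymongo driver (assumed to be installed); the connection URI, database name, collection name and sample documents are placeholders invented for illustration, not anything from the article.

```python
# Minimal sketch: storing schemaless documents in MongoDB via the pymongo
# driver. The URI, database, collection and sample data are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical local server
posts = client["demo_db"]["social_posts"]

# Documents in the same collection can have different shapes.
posts.insert_one({"user": "alice", "text": "Launch day!", "tags": ["product", "launch"]})
posts.insert_one({"user": "bob", "attachment": {"type": "image", "bytes": 52311}})

# Query on a field that only some documents contain.
for doc in posts.find({"tags": "launch"}):
    print(doc["user"], doc["text"])
```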

InsideSales.com
Valuation: $1.5 billion
Funding: $199 million

What it does: InsideSales.com is a big data platform that analyzes business relationships with customers and provides predictive analytics for future sales strategy.

MuleSoft
Valuation: $1.5 billion
Funding: $259 million

What it does: MuleSoft is the commercial offering built around the open source Mule software, an enterprise service bus that helps integrate and coordinate data across applications. Having a common data set that multiple applications can use reduces duplication and cost.

Jasper Technologies
Valuation: $1.4 billion
Funding: $204 million

What it does: Jasper Technologies creates a platform for the budding Internet of Things. Its software allows data generated by connected machines to be collected, stored and analyzed.

 


Newly hired graduates are often technology savvy, but not very enterprise security savvy. That can be a dangerous combination.

Newly hired college grads are a particular security risk to your organization, and special measures need to be taken to manage this “graduate risk.”

That’s the view of Jonathan Levine, CTO of Intermedia, a Calif.-based cloud services provider whose customers employ many recent graduates.

“The problem is that new graduates are often very computer savvy, but unfortunately they are not enterprise savvy,” he says. That’s different from the situation in the past – certainly when many current CIOs took their first jobs – when most graduates knew nothing about computers or the security requirements of the organizations they were joining.

He points out that from middle school or even earlier, students use apps to do their schoolwork and various services to share documents. But they are rarely educated about corporate requirements like information security and confidentiality.

“Coupling a technical literacy in tools like Dropbox and Snapchat with a naiveté about the way that enterprises need to operate is a dangerous combination,” Levine warns.

That means it’s your IT department’s or security team’s responsibility to provide security education to graduates. This should warn them of the dangers of using consumer services, such as cloud storage or webmail, that generally offer inadequate auditing, management capabilities and security for use in an enterprise environment.

“Data loss is a big risk that graduates can introduce when they come from an academic environment,” Levine says. “They come from an environment where information wants to be free and open source programming is common, to the corporate world where we want some sorts of information to be free – and some definitely not to be free.

“We may want information to be shared, but we need to be able to know who is accessing it,” he adds.

Graduates also introduce a disproportionate risk that information useful to hackers may be shared on social media services such as Facebook or Twitter. That’s simply because they’re accustomed to using these services without thinking about the security implications of what they’re making public.

While educating graduates is key, making sure that they put what they learn into practice is also important. Here are six ways you can help ensure that this happens:

1. Judge graduates on the security they practice. Newly hired graduates usually undergo some sort of appraisal or performance review process on a regular basis. This provides the opportunity to make security – and adherence to security practices – a goal that new hires can be evaluated on.

2. Gamify security. Despite the name, this does not involve turning security into a game. Rather, it involves running incentivized security awareness programs.

This approach encourages graduates to attend security courses or gain security qualifications – which may just be internal courses or qualifications run or awarded by the IT department.

As graduates progress they can be awarded points that earn rewards appropriate to the organization, such as certificates, prizes, corporate perks or monetary bonuses.

3. Monitor graduate behavior. This adheres to the old adage of “trust but verify.” The idea is that the IT department should monitor certain aspects of graduates’ IT usage so that their managers can better understand how well they are adhering to security best practices – and intervene when necessary.

4. Make security easy. One way to reduce graduates’ temptation to use consumer services is to ensure that there are enterprise-grade alternatives that are attractive and easy to use.

So while it may be hard to get a graduate who has grown up with Gmail to start using an email client like Outlook that they may see as ugly and unwieldy, it may be easier to wean graduates off Gmail by providing alternatives. This could be something as simple as Outlook Web Access, or a more sophisticated alternative like offering access to Exchange data on a mobile device such as an iPhone or Android tablet using ActiveSync.

5. Run a security event. As an example, Levine says Intermedia runs a “Hacktober” event every fall. During the event the security team does everything that it has warned graduates against, such as leaving USB keys around (that contain harmless malware) and sending out phishing emails (which also do no real harm).

The team can then contact any graduates who pick up and use these USB sticks or who respond to the phishing emails – and graduates can gain kudos by reporting that they have spotted these planted USB devices or phishing emails.

6. Quick win. If there’s one single thing you can do to make a big difference, Levine believes it is to drum it in to new graduates that they need to use separate passwords for each corporate system or application that they log in to.

It’s important to make sure that these are different from any passwords they use for consumer services. Consumer services are tempting targets for hackers because they often have poor security, and if a hacker can get a password from a consumer service that’s also used in a corporate environment, that presents a significant security risk.
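As a rough illustration of that advice, the sketch below generates a distinct random password for each corporate system using only Python’s standard secrets module; the system names are made up for the example, and in practice the output would go straight into a password manager.

```python
# Minimal sketch: one distinct, randomly generated password per corporate
# system, using only the standard library. System names are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 16) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

systems = ["vpn", "email", "hr-portal", "wiki"]          # hypothetical systems
credentials = {name: new_password() for name in systems}

for name, password in credentials.items():
    print(f"{name}: {password}")   # in practice, store these in a password manager
```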

 


Who’s upgrading to Windows 10?

Written by admin
August 20th, 2015

In the three weeks since the new OS’s debut, Windows 8.1 users have been the most willing to migrate

Windows 8.1 users have been half again as likely to upgrade to Windows 10 as their compatriots running Windows 7, data from a Web metrics vendor showed today, confirming expectations about who would upgrade first to Microsoft’s new operating system.

The ascension of Windows 10’s usage share has largely come at the expense of Windows 8.1, according to measurements by Irish analytics company StatCounter. Of the combined usage share losses posted by Windows 7, Windows 8 and Windows 8.1 since the last full week before Windows 10’s July 29 launch, 57% has been attributed to Windows 8.1 deserters.

Windows 7, meanwhile, contributed 37% of the losses by the last three editions, and Windows 8, 6%.

The disparity was not unexpected: Most pundits and analysts figured that users of Windows 8.1 — like Windows 7, eligible for a free upgrade — would be first in line to dump their existing OS and migrate to the new. The changes in Windows 10, including the restoration of the Start menu and windowed apps, were most attractive to Windows 8 and 8.1 users, experts believed, because their removal had been widely panned.

Simply put, Windows 7 users, who were more satisfied with the OS Microsoft gave them, would be less motivated to upgrade. That’s been borne out by StatCounter’s early numbers.

But there were recent signs that Windows 7 users have begun jumping to Windows 10 in numbers nearly equal to Windows 8.1.

During the week of August 10-16, the difference between the declines in Windows 7 and Windows 8.1 was the smallest it’s been since Windows 10’s debut. In that week, Windows 7 lost 0.55 percentage points of usage share, only slightly less than the 0.64 percentage points given up by Windows 8.1. The week before — August 3-9 — the gap between the two was much larger: Windows 7 lost 0.95 percentage points, while 8.1 declined by 1.42 points.

StatCounter’s data also illustrated just how important Windows 7 conversions will be to Windows 10’s ultimate success — as Microsoft has defined it, that would mean 1 billion devices running the operating system by mid-2018. Even if it coaxed every Windows 8 and 8.1 user into upgrading, Microsoft would be looking at a usage share of less than 21% for Windows 10. It must convince large segments of Windows 7’s base to migrate as well.

That may require modification of the Windows 10 pitch, perhaps with less talk about the return of the Start menu, say, and more about enhanced security. Working against Microsoft are a plethora of Windows 10 behaviors, particularly its mandated updates and the concurrent loss of control over what reaches customers’ devices and when. That has raised hackles among the traditionalists who stuck with Windows 7.



A new report from Google finds a disconnect between online security best practices from experts and users. Here’s where the groups differ.

How secure are you?
When it comes to online security, experts and users don’t always agree on the most effective ways to stay safe, according to a new report from Google.

The company surveyed 294 users and 231 security experts (participants who worked five or more years in computer security) to better understand the differences and why they exist. Here’s what they found.

Software updates
Installing software updates was the security practice that differed the most between security experts and users, according to the report. Thirty-five percent of experts mentioned it as a top security tactic, compared to just 2 percent of non-experts.

A lack of awareness of how effective software updates are might explain users’ low numbers, the report said. “Our results suggest the need to invest in developing an updates manager that downloads and installs software updates for all applications—much like mobile application updates on smartphones,” it said.
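As a rough sketch of that “updates manager” idea, the snippet below compares the locally installed version of a few Python packages against the newest release listed on PyPI’s public JSON API; the package names are arbitrary examples, and a real updates manager would of course cover all applications, not just Python packages.

```python
# Rough sketch of the "updates manager" idea, limited to Python packages:
# compare installed versions against the latest release on PyPI.
# The package names below are arbitrary examples.
import json
from importlib.metadata import version, PackageNotFoundError
from urllib.request import urlopen

def latest_on_pypi(package: str) -> str:
    """Return the newest version string PyPI lists for a package."""
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["info"]["version"]

for package in ["requests", "cryptography"]:
    try:
        installed = version(package)
    except PackageNotFoundError:
        print(f"{package}: not installed")
        continue
    newest = latest_on_pypi(package)
    status = "up to date" if installed == newest else f"update available ({newest})"
    print(f"{package}: {installed} -> {status}")
```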

Antivirus software
Using antivirus software was the security action mentioned most by users relative to experts. Forty-two percent of users said that running antivirus software on their personal computers is one of the top-three things they do to stay safe online, compared to just 7 percent of experts.

Firewalls
Firewalls also ranked high among users, with 17 percent mentioning them among their top-three security actions, often in conjunction with antivirus software. Just 3 percent of experts prioritized firewalls as highly. Experts cautioned against relying on antivirus software and firewalls, calling them “simple, but less effective than installing updates” and “less sophisticated.”

Passwords
Using strong and unique passwords was among the most mentioned strategies in both groups, the report found. While more experts than users emphasized unique passwords (25 percent vs. 15 percent), fewer talked about having strong passwords (18 percent vs. 30 percent). Users also prioritized changing passwords more often than experts did (21 percent vs. just 2 percent).

Password managers
Despite password specifics claiming two of their top-five spots, using password managers ranked low among users. Just 3 percent of users mentioned using the tools, compared to 12 percent of experts. Adopting password managers rounded out the top five security practices for experts.

Furthermore, just 32 percent of users ranked password managers as very effective or effective, while only 40 percent said they would follow advice to use them. Users commented that password managers were too “complicated for non-technical users.”

“Users’ reluctance to adopt password managers may also be due to an ingrained mental model that passwords should not be stored or written down—advice users have been given for decades,” the report said. “Password managers can make it feasible to use truly random and unique passwords to help move users away from memorable passwords, which are vulnerable to smart-dictionary attacks.”

Two-factor authentication
While password managers ranked low among users, they rated the use of two-factor authentication considerably higher, both in terms of effectiveness (83 percent) and likelihood of following advice (74 percent). Experts, however, expressed concerns that two-factor authentication is still too difficult for many users and not widely enough available.

“Additional work needs to be done to understand why non-experts are not using two-factor authentication,” the report said. “Some of the expert participants in our study offered several reasons, including the fact that this security feature is still too difficult to explain to non-tech-savvy users, that it is not available on all websites and that it causes significant inconvenience.”
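For readers unfamiliar with how the most common second factor works, here is a minimal standard-library sketch of a time-based one-time password (TOTP, RFC 6238); the shared secret is a made-up example value, not anything tied to a real account.

```python
# Minimal sketch of a TOTP (RFC 6238) second factor: HMAC-SHA1 over the
# current 30-second time step, dynamically truncated to 6 digits.
# The base32 secret below is a made-up example.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app derive the same code from the
# shared secret, so possession of the device becomes the second factor.
print(totp("JBSWY3DPEHPK3PXP"))
```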

Visiting only known websites
After using antivirus and changing passwords frequently, the practice most mentioned by users relative to experts was visiting only known websites. Twenty-one percent of users—compared to just 4 percent of experts—said they only go to known or reputable websites to stay safe online.

Experts polled by Google pointed out problems with this advice: “Visiting only known websites is great, but paralyzing,” one respondent commented, while another said, “Visiting websites you’ve heard of makes no difference in a modern web full of ads and cross-site requests.”

HTTPS
Using HTTPS is not a major priority for either the experts or the users, the report found. Just 10 percent of experts and 4 percent of users placed it in their top-three actions. A majority of both groups, however, said they often look at the URL bar to verify HTTPS (experts: 86 percent; users: 59 percent).
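For a sense of what “verifying HTTPS” involves under the hood, the standard-library sketch below opens a TLS connection, lets Python’s default context validate the certificate chain and hostname, and prints a few certificate details; the host is just an example.

```python
# Illustration only: open a TLS connection, let the default SSL context
# validate the certificate chain and hostname, and print a few details.
# The host below is just an example.
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()   # verifies chain and hostname by default

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        subject = dict(item[0] for item in cert["subject"])
        print("TLS version:", tls.version())
        print("Issued to:", subject.get("commonName"))
        print("Expires:", cert["notAfter"])
```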

Browser cookies
More than half (54 percent) of users considered clearing browser cookies an effective security measure, while the same percentage of security experts called this practice “not good” or “not good at all.”

Security experts commented that doing so might be OK to prevent session hijacking, but “the annoyance of logging in again might throw some users off.”



Black Hat 2015: Spectacular floor distractions

Written by admin
August 8th, 2015

As if hacked cars and massive Android vulnerabilities weren’t enough to keep the attention of security experts attending Black Hat 2015 in Las Vegas, the vendors at this increasingly vendor-driven show were wheeling out shiny distractions ranging from food and drink to celebrity lookalikes to custom art and free giveaways.

May Black Hat be with you
Here’s a look at some of what helped keep Black Hat entertained. (See all the stories from Black Hat.)

Bring it on
Black Hat 2015 – Time for ice cream, cocktails, massages

Free T-shirts
Free t-shirts from Dell Software that were custom silk-screened in real time and emblazoned with a choice of logos.

I’ll be back next year
Cyborg Aaahnold lookalike dressed up as The Terminator guarding Blue Coat’s booth.

The force is with her
A worker defends the RSA booth with light sabers.

On the juice too
Yoda offers up Jedi Juice energy drink at the Palo Alto booth.

Star Wars
A classic Star Wars video game at the ThreatConnect booth.

We scream for ice cream
Free ice cream for the taking provided by the show.

Say cheese
Free cheese and crackers.

To go with the cheese
Free mojitos in lit stem glasses.

You are so tense
Massages to take away the stress of worrying about network security.

Trust the ball
A little Skee Ball at the BeyondTrust booth to bring back childhood carnival memories.

The Monstah
A replica of Boston’s Fenway Park at the Parsons booth so they could show how to pull the plug on the lights with a switch hack.

Message from the wife?
A fox loses his head so he can check his texts outside the ZeroFox booth.

UFO
Out-of-this-world booth theme decorations, like this Area 51 scene set up by AlienVault.



Toshiba’s BiCS technology stacks 48 microscopic NAND layers atop one another, vastly increasing memory density. Credit: Toshiba

The new 3D NAND chip is designed for wide use in consumer, client, mobile and enterprise products

SanDisk and Toshiba announced today that they are manufacturing 256Gbit (32GB), 3-bit-per-cell (X3) 48-layer 3D NAND flash chips that offer twice the capacity of the next densest memory.

The two NAND flash manufacturers are currently producing pilot runs of the 256Gbit X3 chips in their new Yokkaichi, Japan, fabrication plant. They expect to ship the new chips next year.

Last year, Toshiba and SanDisk announced their collaboration on the new wafer fabrication plant, saying they would use the facility exclusively for three-dimensional “V-NAND” flash wafers.

At the time of the announcement, the companies reported the collaboration would be valued at about $4.84 billion when construction of the plant and its operations were figured in.

In March, Toshiba announced the first 48-layer 3D V-NAND chips; those flash chips held 128Gbit (16GB) of capacity.

The new 256Gbit flash chip, which uses 15 nanometer lithography process technology, is suited for diverse applications, including consumer SSDs, smartphones, tablets, memory cards, and enterprise SSDs for data centers, the companies said.

Based on a vertical flash stacking technology that the companies call BiCS [Bit Cost Scaling], the new flash memory stores three bits of data per transistor (triple-level cell or TLC), compared to the previous two-bit (multi-level cell or MLC) memory Toshiba had been producing with BiCS.

“This is the world’s first 256Gb X3 chip, developed using our industry-leading 48-layer BiCS technology and demonstrating SanDisk’s continued leadership in X3 technology. We will use this chip to deliver compelling storage solutions for our customers,” Siva Sivaram, SanDisk’s executive vice president for memory technology, said in a statement.

SanDisk and Toshiba’s fab operations in Yokkaichi, Japan, where the new 48-layer 3D V-NAND chip is being produced.

Last year, Samsung became the first semiconductor manufacturer to begin producing 3D NAND. Its V-NAND chip provides two to 10 times higher reliability and twice the write performance, according to Samsung.

Samsung’s V-NAND uses a cell structure based on 3D Charge Trap Flash (CTF) technology. By applying this technology, Samsung says its 3D V-NAND can provide more than twice the scaling of today’s 20nm-class planar NAND flash.

Samsung is using its 3D V-NAND for a wide range of consumer electronics and enterprise applications, including embedded NAND storage and solid-state drives (SSDs). Samsung’s 3D NAND flash chips were used to create SSDs with capacities ranging from 128GB to 1TB.



This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

After careful consideration you’ve decided it’s time to migrate a major on-premise software solution to the cloud. But how do you create and execute a plan to make sure your migration stays on time, on budget, and delivers on your expectations? Effective planning is critical, and it should start with a thorough assessment of your infrastructure by an experienced vendor who understands your specific objectives.

Usually available as a service engagement from a hosting vendor or, better yet, from the software vendor whose solution is being migrated to the cloud, this cloud readiness assessment is part checklist and part roadmap. It audits the entire environment so you can plan and execute an efficient and effective migration.

Why should you consider such a service? It takes the pressure off. Too many organizations attempt to go it alone, which usually means asking overworked IT staff to try to “fit it in.” Today, the average IT department is already responsible for multiple systems, often as many as seven or eight. Trying to add a project as large and complex as an enterprise cloud migration to that workload is simply not realistic. Not only is that approach a disservice to those tasked with making it happen, it also sends the wrong message about the size and importance of the project. Problems down the road are all but inevitable.

A cloud readiness assessment may also help you achieve a faster time to value. Remember, when you go to a SaaS model, ROI has a completely different meaning. For example, you are no longer looking to recover your long-term capital investment, but instead, expecting to gain instant value from your new OpEx spending. A cloud readiness assessment can help you carefully plan the migration so you can achieve a faster time to value.

Finally, a vendor’s cloud readiness team can usually deliver the skills and specialized expertise required for the specific solution that you or your hosting provider might not have in-house. These teams are truly cross-functional, with a mix of expertise in project management, technical implementations, business processes, industry-specific insights, and more. Additionally, these teams usually have dozens, if not hundreds, of migrations under their belts.

While no one can say they’ve seen it all, these teams are typically astute and can help you identify potential obstacles – challenges you may not have been aware of – before they become unmanageable.

For example, a cloud readiness team will carefully evaluate your existing environment and document all aspects of your infrastructure that could be affected. This includes your entire architecture, including databases, applications, networks, specialized hardware, third-party interfaces, extensions, customizations, and more. Then, they create a comprehensive report that details these findings as well as their recommended action plan to achieve the most successful migration possible.

To better understand how a cloud readiness offering could work – and its ultimate benefits – consider the example of moving an on-premise workforce management solution to the cloud. Workforce management solutions are generally large, enterprise-level implementations that span employee-focused areas such as time and attendance, absence management, HR, payroll, hiring, scheduling, and labor analytics.

The example of workforce management is especially relevant because recent research shows that an increasing number of workforce management buyers are adopting SaaS tools. Research shows that SaaS will be the main driver in growing the global workforce management market by almost $1.5 billion from 2013 to 2018. Additionally, Gartner research indicates that through 2017, the number of organizations using external providers to deliver cloud-related services will rise to 91 percent as they look to mitigate cost and security risks and to meet business goals and desired outcomes.

This research demonstrates that a majority of companies will soon be moving their on-premise workforce management systems to the cloud. But will they be successful?

They have to be. Workforce management systems manage processes and data related to paying employees, managing their time and balances, storing sensitive HR information, complying with industry regulations, and other critical functions. Errors can be extremely costly, especially if they lead to missed paychecks, employee morale issues, lost productivity, grievances, compliance violations or even lawsuits. Failure is simply not an option.

A cloud readiness service is the perfect way to minimize these risks and maximize the results. Specifically, a readiness service is ideally suited to address specialized areas of a workforce management deployment, including:

* Data collection terminals. While many employees still refer to these as “timeclocks,” the fact is that today’s data collection devices are sophisticated proprietary technology consisting of hardware, software, and network/communication capabilities. As part of a migration, a readiness audit would assess the organization’s data collection methods. It would also provide recommendations for transitioning them to a secure network model that meets the organization’s security and performance objectives while ensuring that service is not interrupted when the switchover occurs.

* Interfaces and integrations. Like other enterprise-level technology, workforce management solutions tend to use many different interfaces and custom integrations to feed applications such as ERP systems, outside payroll systems, or third-party analytics applications. In this example, the readiness assessment evaluates the entire integration strategy, including database settings, to make sure mission-critical data continues to flow to support existing business processes.

* Customizations and configurations. Most organizations have custom reports, products, or database tables. Here, the cloud readiness service will thoroughly review existing customizations and configurations, and will provide recommendations to maintain, or even improve, the value they deliver.

When it comes to something as significant — and important — as migrating a major enterprise solution to the cloud, don’t go it alone. Investing in a cloud readiness service can help you assess where you stand today, plan for the migration, and execute against the plan. This helps free up valuable IT resources to focus on what’s really important – implementing strategic initiatives to help the business grow.



If the beta version of Apple’s next mobile OS is causing problems on your iDevice, there’s an easy out

This is a time of temptation for Apple enthusiasts, many of whom are eager to get their hands — and devices — on the company’s newest software. Between June, when company execs tout the upcoming versions of Apple’s desktop and mobile operating systems, and the fall, when the polished, finished versions arrive, Apple users get a chance to serve as beta testers.

Having a hardcore set of fans eager to try out the latest software is a benefit that Apple has embraced. Last year, it allowed users to check out pre-release versions of OS X 10.10 Yosemite. This year, they can beta test OS X 10.11 El Capitan and — for the first time — an early version of the company’s mobile operating system — in this case, iOS 9. (Not available as a public beta is the pre-release build of Watch OS, which is a good thing; some of the developers that have tried it have found it to be unstable, and who wants to brick their brand new Apple Watch?)

To do so, users must sign up for Apple’s Beta Software Program, which is free. The program allows access to relatively stable versions of the pre-release software and gives Apple engineers a wider audience to test it. That, theoretically, leads to more bugs uncovered and fixed before the final release. Public betas roll out every few weeks — the most recent one arrived yesterday.


The problem with the time between beta and final releases is that many people who aren’t developers or technology insiders use their primary device to test what is actually unfinished software — and pre-release software is historically unstable, at best. Yes, Apple routinely warns you not to use your main iPhone, iPad or desktop to test the software. And users routinely ignore that advice.

But there’s good news for iPhone and iPad owners who took the plunge into iOS 9 and have now decided — whether because of problematic apps or the need for a more stable OS — they prefer iOS 8. You can downgrade your device, and it’s not even that difficult to do. But there is a caveat: Any data accumulated between the last time your device was backed up running iOS 8 and since the upgrade to iOS 9 will be lost, even if you recently backed up your data. Put simply, you cannot restore backup data from iOS 9 to a device running iOS 8; it’s not compatible. The best you can do is restore from the most recent backup of iOS 8.

Assuming you still want to return to iOS 8, here’s what to do.

If you’re a public beta tester (who hasn’t signed up to be a full-fledged developer), you can downgrade your iDevice by putting it into DFU mode. (DFU stands for Device Firmware Update.) You use this method to restore iOS 8 without having to get the older operating system manually.

First, perform a backup via iCloud or iTunes. Even though you won’t be able to use this data on iOS 8, it’s always better to have a backup than not. Then go to Settings: iCloud: Find My iPhone and turn off Find My iPhone.

Then follow these instructions to put the iPhone into DFU mode: Turn off the iPhone and plug it into your computer. Hold the Home button down while powering on the phone, and hold both until you see the Apple logo disappear. You can release the power button, but continue holding down the Home button until you see the iPhone’s screen display instructions to plug the device into an iTunes-compatible computer. When prompted on your computer, click on the option to Restore, and iTunes will download the latest released version of iOS for your device.

If you’re a developer, log into the Apple Developer portal (after you turn off Find My iPhone), click on the section for iOS and download the latest officially released build. As of now, that’s iOS 8.4. Once the software is downloaded, open iTunes and click on the iPhone/iPad/iDevice tab. Within the Info tab, there are two buttons: Update and Restore. Hold down the Option button on the keyboard while clicking Restore. Navigate to the file that was just downloaded and select it. The software will then erase the iPhone or iPad of its contents and install that previous version of iOS.

Note: When downgrading to the previous version, make sure to option-click Restore; do not choose Update. Doing that will lead to a loop in which the iPhone is placed in Recovery mode, iTunes attempts to download and install the latest official build, runs into errors, and then attempts to download another copy of the official build. It will do that until you break the cycle and choose to Restore the device. So again, don’t select Update.

Given that Apple software upgrades now routinely roll out in the fall, upgrading your devices to unstable software isn’t a good way to spend the summer. For most people, I’d recommend waiting. The latest features are really only worth having when your device is stable, especially if it’s something you rely on day in and day out. But if running the latest software is your thing, then by all means, have at it. And at least if you run into problems on your iDevice, you now know how to get out of trouble.



Endpoint protection technology is making strides and may soon be touted as an anti-virus replacement

Rather than looking for signatures of known malware as traditional anti-virus software does, next-generation endpoint protection platforms analyze processes, changes and connections in order to spot activity that indicates foul play. While that approach is better at catching zero-day exploits, issues remain.

For instance, intelligence about what devices are doing can be gathered with or without client software. So businesses are faced with the choice of either going without a client and gathering less detailed threat information, or collecting a wealth of detail but facing the deployment, management and updating issues that come with installing agents.

Then comes the choice of how to tease out evidence that incursions are unfolding and to do so without being overwhelmed by the flood of data being collected. Once attacks are discovered, businesses have to figure out how to shut them down as quickly as possible.

Vendors trying to deal with these problems include those with broad product lines such as Cisco and EMC, established security vendors such as Bit9+Carbon Black, FireEye, ForeScout, Guidance Software and Trend Micro, and newer companies focused on endpoint security such as Cylance, Light Cyber, Outlier Security and Tanium. That’s just a minute sampling; the field is crowded, and the competitors are coming up with varying ways to handle these issues.

The value of endpoint protection platforms is that they can identify specific attacks and speed the response to them once they are detected. They do this by gathering information about communications that go on among endpoints and other devices on the network, as well as changes made to the endpoint itself that may indicate compromise. The database of this endpoint telemetry then becomes a forensic tool for investigating attacks, mapping how they unfolded, discovering what devices need remediation and perhaps predicting what threat might arise next.

Agent or not?
The main aversion to agents in general is that they are one more piece of software to deploy, manage and update. In the case of next-gen endpoint protection, they do provide vast amounts of otherwise uncollectable data about endpoints, but that can also be a downside.

Endpoint agents gather so much information that it may be difficult to sort out the attacks from the background noise, so it’s important that the agents are backed by an analysis engine that can handle the volume of data being thrown at it, says Gartner analyst Lawrence Pingree. The amount of data generated varies depending on the agent and the type of endpoint.
Without an agent, endpoint protection platforms can still gather valuable data about what machines are doing by tapping into switch and router data and monitoring Windows Network Services and Windows Management Instrumentation. This information can include who’s logged in to the machine, what the user does, patch levels, whether other security agents are running, whether USB devices are attached, what processes are running, etc.

Analysis can reveal whether devices are creating connections outside what they would be expected to make, a possible sign of lateral movement by attackers seeking ways to victimize other machines and escalate privileges.
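A minimal sketch of that kind of connection telemetry, assuming the third-party psutil library is available: list established connections with their owning processes and flag any remote address outside an expected baseline. The baseline addresses are invented for illustration; a real platform would build its baseline from observed history and far richer data.

```python
# Minimal sketch of endpoint connection telemetry using the third-party
# psutil library: flag established connections to remote addresses that are
# not in an expected baseline. The baseline below is invented for illustration.
import psutil

EXPECTED_REMOTES = {"10.0.0.5", "10.0.0.8"}   # hypothetical known-good servers

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
        if conn.raddr.ip in EXPECTED_REMOTES:
            continue
        try:
            proc_name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            proc_name = "unavailable"
        print(f"unexpected connection: {proc_name} (pid {conn.pid}) "
              f"-> {conn.raddr.ip}:{conn.raddr.port}")
```

In an agentless deployment, comparable data would come from switch and router telemetry or Windows Management Instrumentation queries rather than code running on the endpoint itself.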

Agents can mean one more management console, which means more complexity and potentially more cost, says Randy Abrams, a research director at NSS Labs who researches next-gen EPP platforms. “At some point that’s going to be a difference in head count,” he says, with more staff required to handle all the consoles, and that translates into more cost.

It’s also a matter of compatibility, says Rob Ayoub, also a research director at NSS Labs. “How do you ensure any two agents – of McAfee and Bromium or Cylance – work together, and who do you call if they don’t?”

Security of the management and administration of these platforms should be reviewed as well, Pingree says, to minimize insider threats to the platforms themselves. Businesses should look for EPPs with tools that allow different levels of access for IT staff performing different roles. It would be useful, for example, to authorize limited access for admins while incident-response engineers get greater access, he says.

Analysis engines
Analysis is essential but also complex, so much so that it can be a standalone service such as the one offered by Red Canary. Rather than gather endpoint data with its own agents, it employs sensors provided by Bit9+CarbonBlack. Red Canary supplements that data with threat intelligence gathered from a variety of other commercial security firms, analyzes it all and generates alerts about intrusion it finds on customers’ networks.

The analysis engine flags potential trouble, but human analysts check out flagged events to verify they are real threats. This helps corporate security analysts by cutting down on the number of alerts they have to respond to.

Startup Barkly says it’s working on an endpoint agent that locally analyzes what each endpoint is up to and automatically blocks malicious activity. It also notifies admins about actions it takes.

These engines need to be tied into larger threat-intelligence sources that characterize attacks by how they unfold, revealing activity that leads to a breach without using code that can be tagged as malware, says Abrams.

Most of what is known about endpoint detection and response tools is what the people who make them say they can do. So, if possible, businesses should run trials to determine features and effectiveness first-hand before buying. “The downside of emerging technologies is there’s very little on the testing side,” Pingree says.

Remediation
Endpoint detection tools gather an enormous amount of data that can be used tactically to stop attacks, but also to support forensic investigations into how incursions progressed to the point of becoming exploits. This can help identify which devices need remediation, and some vendors are looking at automating that process.

For example, Triumfant offers Resolution Manager, which can restore endpoints to known good states after detecting malicious activity. Other vendors offer remediation features or say they are working on them, but the trend is toward using the same platforms to fix the problems they find.

The problem businesses face is that endpoints remain vulnerable despite the efforts of traditional endpoint security, which has evolved into security suites – anti-virus, anti-malware, intrusion detection, intrusion prevention, etc. While that has progressively chipped away at the problem, it has led to another one.

“They have actually just added more products to the endpoint portfolio, thus taking us full circle back to bloated end points,” says Larry Whiteside, the CSO for the Lower Colorado River Authority. “Luckily, memory and disk speed (SSD) have kept that bulk from crippling endpoint performance.”

As a result he is looking at next-generation endpoint protection from SentinelOne. Security based on what endpoints are doing as opposed to seeking signatures of known malicious behavior is an improvement over traditional endpoint protection, he says. “Not saying signatures are totally bad, but that being a primary or only decision point is horrible. Therefore, adding behavior based detection capabilities adds value.”

So much value that he is more concerned about that than he is about whether there is a hard return on investment. “The reality is that I am more concerned about detection than I am ROI, so I may not even perform that analysis. I can say that getting into a next-gen at the right stage can be beneficial to an organization,” he says.

Anti-virus replacement?
So far vendors of next-generation endpoint protection have steered clear of claiming their products can replace anti-virus software, despite impressive test results. But that could be changing. Within a year, regulatory hurdles that these vendors face may disappear, says George Kurtz, CEO of CrowdStrike.

Within a year rules that require use of anti-virus in order to pass compliance tests will allow next-generation endpoint protection as well, he says. “That’s really our goal,” he says. “From the beginning we thought we could do that.”

He says everyone is focused on malware, but that represents just 40% of attacks. The rest he calls “malware-less intrusions” such as insider theft where attackers with credentials steal information without use of malware.

Until regulations are rewritten, it’s important for regulated businesses to meet the anti-virus requirement, Abrams says, even though other platforms may offer better protection. “In some cases that’s actually more important than the ability to protect, because you won’t be protected from legal liabilities.”

Meanwhile having overlapping anti-virus and next-gen endpoint protection means larger enterprises are likely customers for now vs. smaller businesses with fewer resources, he says. But even for smaller businesses the cost may be worth it.

“What do they have to lose and how much does it cost to lose this information vs. how much does it cost to protect it?” Abrams says.


 


Top 10 job boards on Twitter

Written by admin
July 16th, 2015


Celebrities, politicians and companies all have a Twitter account today, so why not job boards? Here are 10 job boards that are using Twitter better than the competition.

Top job boards on Twitter
Twitter isn’t just for celebrities, companies and parody accounts. It’s now an outlet for job boards as well. Turning to Twitter in your job search might not feel natural, but Twitter is becoming a popular recruitment tool. As social media becomes a mainstay of everyday life, it’s becoming part of the job search, too.

Engagement Labs, creator of eValue, which rates how well companies use social media, scores successful use of social media based on likes, follows and overall audience engagement. Here are 10 social job boards using Twitter better than the competition.

#1 Twitter: Monster
Monster’s main Twitter handle, where the company shares both unique and shared content, has over 150,000 followers. Its eValue score was “20 percent higher than their nearest competitor,” according to Engagement Labs, along with the highest impact score, indicating its content is reaching a large — and interested — audience.

#2 Twitter: CareerOneStop
CareerOneStop is a socially successful government website, coming in second for its use of Twitter and its ability to engage with its audience of over 5,000 followers. The website is sponsored by the U.S. Department of Labor and offers a number of helpful resources for job seekers in every industry.

#3 Twitter: ZipRecruiter
ZipRecruiter may have a modest following of around 4,000 on Twitter, but the company has created a social outlet for its services and its followers are engaged. ZipRecruiter posts a range of job-seeker-related content, updates about the company, industry news and, of course, job listings. The site pulls in jobs from other well-known job boards including Monster, Glassdoor and SimplyHired, to name a few.

#4 Twitter: AOL Jobs
AOL has come a long way since it dominated the Internet back in the ’90s, and the company has since moved on from dial-up tones and mailing out its latest software. The Internet company has now extended its reach into the job market with AOL Jobs, and it’s getting the right feedback on Twitter to put it at number 4 on the list of job boards using Twitter. With over 13,000 followers, AOL Jobs’ Twitter feed mostly features original – and interesting – job-seeker-focused content that will draw you to the AOL Jobs homepage.

#5 Twitter: FlexJobs
FlexJobs helps you find jobs that aren’t your typical 9-to-5 office roles. It includes remote opportunities, freelance work and other less conventional career listings on its job board. FlexJobs’ Twitter account, with more than 8,000 followers, houses content related to flexible job schedules, remote work and telecommuting. It’s number 5 on the list of companies with the most powerful social job boards, so if you’re looking for remote, part-time or freelance work, it might be the right account to follow.

#6 Twitter: CareerBuilder
CareerBuilder is a well-known career site and jobs board, but it also dominates the top 10 list for Twitter. At number 6, CareerBuilder uses its Twitter account to connect with nearly 150,000 followers and share content related to job searching, employment, recent college graduates and, of course, job postings.

#7 Twitter: Mediabistro
Mediabistro is more than a job board. The website also includes educational programs, articles and industry events in addition to job listings. Its Twitter account, with over 170,000 followers, is no different. The social account features job listings, information for job seekers, tips and strategies for finding the right job, and more. Mediabistro also poses questions to its followers and posts funny hashtags and memes, going the extra mile to connect with followers.

#8 Twitter: Glassdoor
Glassdoor was a pioneer for job seekers, bringing them reliable salary data and reviews from current and former employees at a large number of companies. It’s now channeling its know-how and data into a well-rounded Twitter account with over 80,000 followers. The company features original content, shared articles and job search statistics on Twitter, making it another great option to follow if you are in the market for a new job.

#9 Twitter: Snagajob
Snagajob isn’t successful only on Facebook; it also makes the top 10 list for Twitter. It’s clear that Snagajob is trying to connect with its millennial followers, with its use of emojis and references to pop culture, and it seems to be working. The account has over 14,000 followers and scored high on the list of companies using Twitter effectively.

#10 Twitter: TheLadders
Similar to other job boards, TheLadders has a wealth of job-seeker-related content on its Twitter account. With over 60,000 followers, TheLadders shares and posts content from its own site, articles from other sources and networking tips. It’s focused on connecting with driven job seekers who want to push their careers onward and upward, and its Twitter efforts seem to be doing the trick.


 


Even as PC business contracts for 14th straight quarter, Mac sales surge 16%

Skittish about the impact of Windows 10, including the free upgrade-from-Windows-7-and-8.1 offer, computer makers drew down inventories and sent PC shipments plummeting in the June quarter, IDC said today.

The quarter was among the worst ever for personal computers, according to the research firm, which estimated the year-over-year contraction at 11.8%. That decline has been exceeded only twice in the two decades that IDC has tracked shipments: in early 2013, when the January quarter was off 13%, and in the September quarter of 2001, which posted a decline of 12%.

OEMs (original equipment manufacturers) shipped approximately 66 million systems in the three months that ended June 30, IDC said, down from the 75 million during the same stretch in 2014.

The dramatic downturn was due to several factors, said IDC analyst Loren Loverde, who runs IDC’s PC forecast team, including a tough comparative from last year as enterprises scrambled to replace obsolete Windows XP machines. The 2001 operating system was retired by Microsoft in April 2014.

But Windows 10 also played a part, Loverde contended. “We’ve heard from various parties, including ODMs [original device manufacturers], component makers and distributors, that they’ve reduced inventory as Windows 10 approached,” he said.

Although the industry is more bullish about Windows 10 than its predecessor, Windows 8, that’s not been reflected in larger shipments, simply because OEMs aren’t sure how the new OS will play out in the coming quarter or two. To safeguard against overstocking the channel, and to some extent to prepare for the launch of Windows 10, OEMs played it conservatively and tightened inventories by building fewer PCs.

“Although it’s very difficult to quantify, I’d say that this inventory reduction is a little bit more dramatic than before Windows 8,” said Loverde.

Three years ago, inventories surged as PC makers cranked out devices — 85 million in the second quarter of 2012, 88 million in the third — figuring that Windows 8 was going to be a big hit and juice sales. That didn’t happen.

“There were a lot of [retail and distribution] customers buying additional inventory and promoting Windows 8,” Loverde said. “The [negative] impact on inventory is more substantial this time, and everyone is taking a wait-and-see approach, thinking that they’ll make decisions in the second half of the year.”

Some of the nervousness on the part of computer makers revolves around the upgrade offer Microsoft will extend to all consumers and many businesses with existing PCs running Windows 7 or Windows 8.1. Starting July 29, Microsoft will give those customers a free upgrade to Windows 10. The deal will expire a year later, on July 29, 2016.

Because Microsoft has never before offered a free upgrade of this magnitude, it’s uncharted territory for Windows OEMs. A host of unknowns, ranging from whether the free upgrade will keep significant numbers on old hardware to the eventual reaction to the new OS, have made computer makers edgy about committing to fully packing the channel.

“It’s even riskier when the market is declining,” Loverde said of carrying large inventories.

And the PC business has been in decline, and will continue to contract.

IDC has held to its prediction that for 2015, global PC shipments will be down 6.2% from last year’s 308 million, or to around 289 million. (That may change to an even more depressing number; Loverde said IDC had not yet adjusted the figure to account for the worse-than-expected second quarter.) In 2016, the industry will shrink by another 2%.

The brightest spot in the quarter’s forecast was again Apple, which IDC had in the OEM fourth spot with shipments of 5.1 million Macs, a year-over-year jump of 16%. Other manufacturers in the top five — Lenovo, HP, Dell and Acer — were pegged with declines of 8%, 10%, 9% and 27%, respectively.

“Apple’s a pretty unique company,” said Loverde. “They’ve cultivated their market position and product portfolio, and, of course, their positioning is towards more affluent buyers who are not as price sensitive.”

Loverde was convinced that some of the Mac’s strong sales in the June quarter benefited from uncertainties about Windows 10 on the part of consumers.

Unclear, said Loverde, is how the Mac will fare if, as IDC and others believe, Apple introduces a larger iPad later this year, a tablet better geared to the productivity chores typically handled by personal computers.

“I think there will be some impact on Mac shipments, but Apple is always willing to cannibalize its own products,” he said. “But the upside on tablets [generated by a larger iPad] and as a brand is bigger than the risk.”


SDN will support IoT by centralizing control, abstracting network devices, and providing flexible, dynamic, automated reconfiguration of the network

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Organizations are excited about the business value of the data that will be generated by the Internet of Things (IoT). But there’s less discussion about how to manage the devices that will make up the network, secure the data they generate and analyze it quickly enough to deliver the insights businesses need.

Software defined networking (SDN) can help meet these needs. By virtualizing network components and services, SDN can rapidly and automatically reconfigure network devices, reroute traffic and apply authentication and access rules. All this can help speed and secure data delivery, and improve network management, for even the most remote devices.

SDN enables the radical simplification of network provisioning with predefined policies for plug-and-play set-up of IoT devices, automatic detection and remediation of security threats, and the provisioning of the edge computing and analytics environments that turn data into insights.

Consider these two IoT use cases:
* Data from sensors within blowout preventers can help oil well operators save millions of dollars a year in unplanned downtime. These massive data flows, ranging from pressure readings to valve positions, are now often sent from remote locations to central servers over satellite links. This not only increases the cost of data transmission but delays its receipt and analysis. This latency can be critical – or even deadly – when the data is used to control powerful equipment or sensitive industrial processes.

Both these problems will intensify as falling prices lead to the deployment of many more sensors, and technical advances allow each sensor to generate much more data. Processing more data at the edge (i.e. near the well) and determining which is worth sending to a central location (what some call Fog or Edge Computing) helps alleviate both these problems. So can the rapid provisioning of network components and services, while real-time application of security rules helps protect proprietary information.

* Data from retail environments, such as from a customer’s smartphone monitoring their location and the products they take pictures of, or in-store sensors monitoring their browsing behavior, can be used to deliver customized offers to encourage an immediate sale. Again, the volume of data and the need for fast analysis and action calls for the rapid provisioning of services and edge data processing, along with rigorous security to ease privacy concerns.

Making such scenarios real requires overcoming unprecedented challenges.
One is the sheer number of devices, which is estimated to reach 50 billion by 2020, with each new device expanding the “attack surface” exposed to hackers. Another is the amount of data moving over this network, with IDC projecting IoT will account for 10% of all data on the planet by 2020.

Then there is the variety of devices that need to be managed and supported. These range from network switches supporting popular management applications and protocols, to legacy SCADA (supervisory control and data acquisition) devices and those that lack the compute and/or memory to support standard authentication or encryption. Finally, there is the need for very rapid, and even real-time, response, especially for applications involving safety (such as hazardous industrial processes) or commerce (such as monitoring of inventory or customer behavior).

Given this complexity and scale, manual network management is simply not feasible. SDN provides the only viable, cost-effective means to manage the IoT, secure the network and the data on it, minimize bandwidth requirements and maximize the performance of the applications and analytics that use its data.

SDN brings three important capabilities to IoT:
Centralization of control through software that has complete knowledge of the network, enabling automated, policy-based control of even massive, complex networks. Given the huge potential scale of IoT environments, SDN is critical in making them simple to manage.

Abstraction of the details of the many devices and protocols in the network, allowing IoT applications to access data, enable analytics and control the devices, and add new sensors and network control devices, without exposing the details of the underlying infrastructure. SDN simplifies the creation, deployment and ongoing management of the IoT devices and the applications that benefit from them.

The flexibility to tune the components within the IoT (and manage where data is stored and analyzed) to continually maximize performance and security as business needs and data flows change. IoT environments are inherently dispersed, with many end devices and much edge computing. As a result, the network is even more critical than in standard application environments. SDN’s ability to dynamically change network behavior based on new traffic patterns, security incidents and policy changes will enable IoT environments to deliver on their promise.

For example, through the use of predefined policies for plug-and-play set-up, SDN allows for the rapid and easy addition of new types of IoT sensors. By abstracting network services from the hardware on which they run, SDN allows automated, policy-based creation of virtual load balancers, quality of service for various classes of traffic, and the provisioning of network resources for peak demands.
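One loose way to picture such a predefined policy (the device classes, VLAN numbers and rate limits below are invented, and every controller has its own policy syntax) is as a table of intent that a controller consults when a new device attaches, instead of configuring each device by hand:

    # Hypothetical plug-and-play policy table: when a device of a known class
    # attaches, its network configuration is derived from intent, not typed in.
    ONBOARDING_POLICIES = {
        "pressure-sensor": {"vlan": 210, "qos_class": "telemetry", "rate_limit_kbps": 64},
        "hd-camera":       {"vlan": 220, "qos_class": "video",     "rate_limit_kbps": 4000},
        "unknown":         {"vlan": 999, "qos_class": "quarantine", "rate_limit_kbps": 16},
    }

    def onboard(device_class, mac_address):
        """Return the settings a controller would push for a new device."""
        policy = ONBOARDING_POLICIES.get(device_class, ONBOARDING_POLICIES["unknown"])
        return {"mac": mac_address, **policy}

    print(onboard("pressure-sensor", "00:1b:44:11:3a:b7"))
    # {'mac': '00:1b:44:11:3a:b7', 'vlan': 210, 'qos_class': 'telemetry', 'rate_limit_kbps': 64}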

The ease of adding and removing resources reduces the cost and risk of IoT experiments by allowing the network infrastructure to be deprovisioned and reused when it is no longer needed.

SDN will make it easier to find and fight security threats through the improved visibility it provides into network traffic right to the edge of the network. It also makes it easy to apply automated policies to redirect suspicious traffic to, for example, a honeynet where it can be safely examined. By making network management less complex, SDN allows IT to set and enforce more segmented access controls.
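One minimal sketch of that pattern, assuming a controller that accepts policy rules over a REST API (the URL and rule schema below are made up for illustration), might look like this: when monitoring flags a suspicious source, push a policy that steers its traffic into a honeynet segment.

    import requests

    # Hypothetical policy endpoint; real controllers expose their own REST APIs.
    CONTROLLER = "https://sdn-controller.example.com/api/policies"

    def quarantine(source_ip, honeynet_segment="honeynet-1"):
        """Push a redirect rule so traffic from source_ip lands in the honeynet."""
        rule = {
            "match": {"src_ip": source_ip},
            "action": {"redirect_to": honeynet_segment},
            "priority": 1000,
            "comment": "auto-generated by anomaly monitoring",
        }
        resp = requests.post(CONTROLLER, json=rule, timeout=5)
        resp.raise_for_status()
        return resp.json()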

SDN can provide a dynamic, intelligent, self-learning layered model of security that provides walls within walls and ensures people can only change the configuration of the devices they’re authorized to “touch.” This is far more useful than the traditional “wall” around the perimeter of the network, which won’t work with the IoT because of its size and the fact the enemy is often inside the firewall, in the form of unauthorized actors updating firmware on unprotected devices.

Finally, by centralizing configuration and management, SDN will allow IT to effectively program the network to make automatic, real-time decisions about traffic flow. It will allow not only sensor data, but also data about the health of the network itself, to be analyzed close to the network edge, giving IT the information it needs to prevent traffic jams and security risks. The centralized configuration and management of the network, and the abstraction of network devices, also make it far easier to manage applications that run on the edge of the IoT.

For example, SDN will allow IT to fine-tune data aggregation, so data that is less critical is held at the edge and not transmitted to core systems until it won’t slow critical application traffic. This edge computing can also perform fast, local analysis and speed the results to the network core if the analysis indicates an urgent situation, such as the impending failure of a jet engine.
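A rough sketch of that edge-side logic follows; the threshold, field names and the failure scenario are purely illustrative. Routine readings are buffered locally for later upload, while anything urgent is forwarded to the core immediately.

    # Illustrative edge filter: hold routine readings locally, push urgent ones now.
    URGENT_VIBRATION_THRESHOLD = 8.0   # invented units for the sketch

    def process_at_edge(readings, send_to_core, local_buffer):
        for reading in readings:
            if reading["vibration"] >= URGENT_VIBRATION_THRESHOLD:
                send_to_core(reading)          # e.g. signs of impending failure
            else:
                local_buffer.append(reading)   # upload later, when traffic is light

    buffer = []
    process_at_edge(
        [{"sensor": "engine-3", "vibration": 2.1},
         {"sensor": "engine-3", "vibration": 9.4}],
        send_to_core=lambda r: print("urgent ->", r),
        local_buffer=buffer,
    )
    print(len(buffer), "routine readings held at the edge")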

Prepare Now
IT organizations can become key drivers in capturing the promised business value of IoT through the use of SDNs. But this new world is a major change and will require some planning.

To prepare for the intersection of IoT and SDN, you should start thinking about what policies in areas such as security, Quality of Service (QoS) and data privacy will make sense in the IoT world, and how to structure and implement such policies in a virtualized network.

All companies have policies today, but typically they are implicit – that is, buried in a morass of ACLs and network configurations. SDN will turn this process on its head, allowing IT teams to develop human-readable policies that are implemented by the network. IT teams should start understanding how they’ve configured today’s environment so that they can decide which policies should be brought forward.

They should plan now to include edge computing and analytics in their long-term vision of the network. At the same time, they should remember that IoT and SDN are in their early stages, meaning their network and application planners should expect unpredicted changes in, for example, the amounts of data their networks must handle, and the need to dynamically reconfigure them for local rather than centralized processing. The key enablers, again, will be centralization of control, abstraction of network devices and flexible, dynamic, automated reconfiguration of the network. Essentially, this means isolating network slices: segmenting the network by proactively pushing policy from a centralized controller to cordon off various types of traffic. Centralized control planes offer the advantages of easier operations and management.

IT teams should also evaluate their network, compute and data needs across the entire IT spectrum, as the IoT will require an end-to-end SDN solution encompassing all manner of devices, not just those from one domain within IT, but across the data center, the wide area network (WAN) and the access layer.

Lastly, IT will want to get familiar with app development in edge computing environments, which mix local and centralized processing. As the network’s abstraction to the application layer changes and becomes highly programmable, network teams need to invest in skills and training around these programming models (e.g., REST) so that they can more easily partner with the app development teams.

IoT will be so big, so varied and so remote that conventional management tools just won’t cut it. Now is the time to start learning how SDN can help you manage this new world and assure the speedy, secure delivery and analysis of the data it will generate.



In the earliest days of Amazon.com, SQL databases weren’t cutting it, so the company created DynamoDB and, in doing so, helped usher in the NoSQL market

Behind every great ecommerce website is a database, and in the early 2000s Amazon.com’s database was not keeping up with the company’s business.

Part of the problem was that Amazon didn’t have just one database – it relied on a series of them, each with its own responsibility. As the company headed toward becoming a $10 billion business, the number and size of its SQL databases exploded and managing them became more challenging. By the 2004 holiday shopping rush, outages became more common, caused in large part by overloaded SQL databases.

Something needed to change.
But instead of looking for a solution outside the company, Amazon developed its own database management system. It was a whole new kind of database, one that threw out the rules of traditional SQL varieties and was able to scale up and up and up. In 2007 Amazon shared its findings with the world: CTO Werner Vogels and his team released a paper titled “Dynamo: Amazon’s Highly Available Key-value Store.” Some credit it with being the moment that the NoSQL database market was born.

The problem with SQL
The relational databases that have been around for decades and most commonly use the SQL programming language are ideal for organizing data in neat tables and running queries against them. Their success is undisputed: Gartner estimates the SQL database market to be $30 billion.

But in the early to mid-2000s, companies like Amazon, Yahoo and Google had data demands that SQL databases just didn’t address well. (To throw a bit of computer science at you, the CAP theorem states that it’s impossible for a distributed system, such as a big database, to simultaneously guarantee consistency, availability and partition tolerance. SQL databases prioritize consistency over speed and flexibility, which makes them great for managing core enterprise data such as financial transactions, but less suited to other kinds of workloads.)

Take Amazon’s online shopping cart service, for example. Customers browse the ecommerce website and put something in their virtual shopping cart where it is saved and potentially purchased later. Amazon needs the data in the shopping cart to always be available to the customer; lost shopping cart data is a lost sale. But, it doesn’t necessarily need every node of the database all around the world to have the most up-to-date shopping cart information for every customer. A SQL/relational system would spend enormous compute resources to make data consistent across the distributed system, instead of ensuring the information is always available and ready to be served to customers.

One of the fundamental tenets of Amazon’s Dynamo, and NoSQL databases in general, is that they sacrifice data consistency for availability. Amazon’s priority is to maintain shopping cart data and to have it served to customers very quickly. Plus, the system has to be able to scale to serve Amazon’s fast-growing demand. Dynamo solves all of these problems: It backs up data across nodes, and can handle tremendous load while maintaining fast and dependable performance.

“It was one of the first NoSQL databases,” explains Khawaja Shams, head of engineering at Amazon DynamoDB. “We traded off consistency and very rigid querying semantics for predictable performance, durability and scale – those are the things Dynamo was super good at.”

DynamoDB: A database in the cloud
Dynamo fixed many of Amazon’s problems that SQL databases could not. But throughout the mid-to-late 2000s, it still wasn’t perfect. Dynamo boasted the functionality that Amazon engineers needed, but required substantial resources to install and manage.

The introduction of DynamoDB in 2012 proved to be a major upgrade, though. The hosted version of the database Amazon uses internally lives in Amazon Web Services’ IaaS cloud and is fully managed. Amazon engineers and AWS customers don’t provision a database or manage storage of the data. All they do is request the throughput they need from DynamoDB. Customers pay $0.0065 per hour for about 36,000 writes per hour to the database, plus $0.25 per GB of data stored in the system per month. If the application needs more capacity, then with a few clicks the database spreads the workload over more nodes.
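For a rough sense of what “request the throughput they need” looks like in practice, here is a minimal sketch using the boto3 Python SDK; the table name, key schema and capacity numbers are invented for illustration.

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    # Ask for a table with a chosen amount of read/write capacity; no servers,
    # storage volumes or replication settings are specified anywhere.
    table = dynamodb.create_table(
        TableName="ShoppingCarts",
        KeySchema=[
            {"AttributeName": "customer_id", "KeyType": "HASH"},
            {"AttributeName": "item_id", "KeyType": "RANGE"},
        ],
        AttributeDefinitions=[
            {"AttributeName": "customer_id", "AttributeType": "S"},
            {"AttributeName": "item_id", "AttributeType": "S"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
    )
    table.wait_until_exists()

    # Scaling up later is a metadata change, not a re-architecture.
    table.update(ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 250})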

AWS is notoriously opaque about how DynamoDB and many of its other infrastructure-as-a-service products run under the covers, but a promotional video reveals that the service employs solid-state drives and notes that when customers use DynamoDB, their data is spread across availability zones/data centers to ensure availability.

Forrester principal analyst Noel Yuhanna calls it a “pretty powerful” database and considers it one of the top NoSQL offerings, especially for key-value store use cases.

DynamoDB has grown significantly since its launch. While AWS will not release customer figures, company engineer James Hamilton said in November that DynamoDB has grown 3x in requests it processes annually and 4x in the amount of data it stores compared to the year prior. Even with that massive scale and growth, DynamoDB has consistently returned queries in three to four milliseconds.

Feature-wise, DynamoDB has grown, too. NoSQL databases are generally broken into a handful of categories: key-value store databases organize information with a key and a value; document databases allow full documents to be searched against; and graph databases track connections between data. DynamoDB originally started as a key-value database, but last year AWS expanded it to become a document database by supporting JSON-formatted files. AWS last year also added Global Secondary Indexes to DynamoDB, which allow users to have copies of their database, typically one for production and another for querying, analytics or testing.
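A short sketch of the key-value/document model in boto3 terms follows; the table, keys and attributes are made up. Items are addressed by their key, nested JSON-like structures are stored directly as attributes, and reads are eventually consistent unless the caller explicitly asks otherwise.

    from decimal import Decimal
    import boto3

    carts = boto3.resource("dynamodb", region_name="us-east-1").Table("ShoppingCarts")

    # Store a document-style item; nested maps and lists are supported, and
    # boto3 represents numbers as Decimal.
    carts.put_item(Item={
        "customer_id": "c-1017",
        "item_id": "sku-88231",
        "quantity": 2,
        "attributes": {"color": "red", "gift_wrap": True},
        "price": Decimal("19.99"),
    })

    # Reads default to eventual consistency (the availability side of the
    # trade-off described earlier); ConsistentRead=True requests a strongly
    # consistent read at higher cost.
    item = carts.get_item(
        Key={"customer_id": "c-1017", "item_id": "sku-88231"},
        ConsistentRead=True,
    )["Item"]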

NoSQL’s use case and vendor landscape
The fundamental advantage of NoSQL databases is their ability to scale and have flexible schema, meaning users can easily change how data is structured and run multiple queries against it. Many new web-based applications, such as social, mobile and gaming-centric ones, are being built using NoSQL databases.

While Amazon may have helped jumpstart the NoSQL market, it is now one of dozens of vendors attempting to cash in on it. Nick Heudecker, a Gartner researcher, stresses that even though NoSQL has captured the attention of many developers, it is still a relatively young technology. He estimates that annual revenue from NoSQL products has yet to surpass half a billion dollars (that’s not an official Gartner estimate). Heudecker says the majority of his enterprise client inquiries are still around SQL databases.

NoSQL competitors MongoDB, MarkLogic, Couchbase and DataStax have strong standings in the market as well, and some seem to have greater traction among enterprise customers compared to DynamoDB, Heudecker says.

Living in the cloud

What’s holding DynamoDB back in the enterprise market? For one, it has no on-premises version – it can only be used in AWS’s cloud. Some users just aren’t comfortable using a cloud-based database, Heudecker says. DynamoDB competitors offer users the opportunity to run databases on their own premises behind their own firewall.

Khawaja Shams, director of engineering for DynamoDB, says when the company created Dynamo it had to throw out the old rules of SQL databases.

Shams, AWS’s DynamoDB engineering head, says because the technology is hosted in the cloud, users don’t have to worry about configuring or provisioning any hardware. They just use the service and scale it up or down based on demand, while paying only for storage and throughput, he says.

For security-sensitive customers, there are opportunities to encrypt data as DynamoDB stores it. Plus, DynamoDB is integrated with AWS – the market’s leading IaaS platform (according to Gartner’s Magic Quadrant report), which supports a variety of tools, including relational database services such as Aurora and RDS.

AdRoll rolls with AWS DynamoDB

Marketing platform provider AdRoll, which serves more than 20,000 customers in 150 countries, is among those organizations comfortable using the cloud-based DynamoDB. Basically, if an ecommerce site visitor browses a product page but does not buy the item, AdRoll bids on ad space on another site the user visits to show the product they were previously considering. It’s an effective method for getting people to complete purchases they had been considering.

It’s really complicated for AdRoll to figure out which ads to serve to which users though. Even more complicated is that AdRoll needs to decide in about the time it takes for a webpage to load whether it will bid on an ad spot and which ad to place. That’s the job of CTO Valentino Volonghi – he has about 100 milliseconds to play with. Most of that time is gobbled up by network latency, so needless to say AdRoll requires a reliably fast platform. It also needs huge scale: AdRoll considers more than 60 billion ad impressions every day.

AdRoll uses DynamoDB and Amazon’s Simple Storage Service (S3) to sock away data about customers and help its algorithm decide which ads to buy for customers. In 2013, AdRoll had 125 billion items in DynamoDB; it’s now up to half a trillion. It makes 1 million requests to the system each second, and the data is returned in less than 5 milliseconds — every time. AdRoll has another 17 million files uploaded into Amazon S3, taking up more than 1.5 petabytes of space.

AdRoll didn’t have to build a global network of data centers to power its product, thanks in large part to using DynamoDB.

“We haven’t spent a single engineer to operate this system,” Volonghi says. “It’s actually technically fun to operate a database at this massive scale.”

Not every company is going to have the needs of Amazon.com’s ecommerce site or AdRoll’s real-time bidding platform. But many are struggling to achieve greater scale without major capital investments. The cloud makes that possible, and DynamoDB is a prime example.



 

 

The interim CEO would have to leave his post at Square to take over at Twitter

A week and a half after Dick Costolo announced that he would be stepping down from the CEO role at Twitter, the company’s board of directors has sent a shot across the bow of one of the expected front-runner candidates to take the social network’s top job.

The social micro-blogging company’s search committee will only consider CEO candidates “who are in a position to make a full-time commitment to Twitter,” the board said. That would seem to rule out Jack Dorsey, the company’s co-founder who currently works as the CEO of Square and will be filling in as interim CEO of Twitter.

Dorsey has said that he plans to remain at the helm of the payment processing company he co-founded, but hasn’t explicitly ruled out a bid for a permanent berth in Twitter’s top job. Now the Twitter board has made it clear that he would have to depart Square if he wants to run Twitter. That’s a rough proposition for Dorsey, especially since Square is reportedly planning to go public this year.

As for the overall search process, Twitter’s search committee has contracted with executive search firm Spencer Stuart to evaluate internal and external candidates for the job. The board hasn’t set a firm time frame for its hiring of a new CEO, saying that there’s a “sense of urgency” to the process but that it will take its time to find the right person for the job.

Whoever steps into the top spot at Twitter will have to contend with increased pressure on the company from Wall Street. Investors have been disappointed by Twitter’s revenue and user growth in recent quarters.


 


Machine intelligence can be used to police networks and fill gaps where the available resources and capabilities of human intelligence are clearly falling short

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Humans are clearly incapable of monitoring and identifying every threat on today’s vast and complex networks using traditional security tools. We need to enhance human capabilities by augmenting them with machine intelligence. Mixing man and machine – in some ways, similar to what OmniCorp did with RoboCop – can heighten our ability to identify and stop a threat before it’s too late.

The “dumb” tools that organizations rely on today are simply ineffective. There are two consistent, yet still surprising things that make this ineptitude fairly apparent. The first is the amount of time hackers have free rein within a system before being detected: eight months at Premera and P.F. Chang’s, six months at Neiman Marcus, five months at Home Depot, and the list goes on.

The second surprise is the response. Everyone usually looks backwards, trying to figure out how the external actors got in. Finding the proverbial leak and plugging it is obviously important, but this approach only treats a symptom instead of curing the disease.

The disease, in this case, is the growing faction of hackers that are getting so good at what they do they can infiltrate a network and roam around freely, accessing more files and data than even most internal employees have access to. If it took months for Premera, Sony, Target and others to detect these bad actors in their networks and begin to patch the holes that let them in, how can they be sure that another group didn’t find another hole? How do they know other groups aren’t pilfering data right now? Today, they can’t know for sure.

The typical response
Until recently, companies have really only had one option as a response to rising threats, a response that most organizations still employ. They re-harden systems, ratchet up firewall and IDS/IPS rules and thresholds, and put stricter web proxy and VPN policies in place. But by doing this they drown their incident response teams in alerts.

Tightening policies and adding to the number of scenarios that will raise a red flag just makes the job more difficult for security teams that are already stretched thin. This causes thousands of false positives every day, making it physically impossible to investigate every one. As recent high profile attacks have proven, the deluge of alerts is helping malicious activity slip through the cracks because, even when it is “caught,” nothing is being done about it.

In addition, clamping down on security rules and procedures just wastes everyone’s time. By design, tighter policies will restrict access to data, and in many cases, that data is what employees need to do their jobs well. Employees and departments will start asking for the tools and information they need, wasting precious time for them and the IT/security teams that have to vet every request.

Putting RoboCop on the case
Machine intelligence can be used to police massive networks and help fill gaps where the available resources and capabilities of human intelligence are clearly falling short. It’s a bit like letting RoboCop police the streets, but in this case the main armament is statistical algorithms. More specifically, statistics can be used to identify abnormal and potentially malicious activity as it occurs.

According to Dave Shackleford, an analyst at SANS Institute and author of its 2014 Analytics and Intelligence Survey, “one of the biggest challenges security organizations face is lack of visibility into what’s happening in the environment.” The survey of 350 IT professionals asked why they have difficulty identifying threats and a top response was their inability to understand and baseline “normal behavior.” It’s something that humans just can’t do in complex environments, and since we’re not able to distinguish normal behavior, we can’t see abnormal behavior.

Instead of relying on humans looking at graphs on big screen monitors, or human-defined rules and thresholds to raise flags, machines can learn what normal behavior looks like, adjusting in real time and becoming smarter as they process more information. What’s more, machines possess the speed required to process the massive amount of information that networks create, and they can do it in near-real time. Some networks process terabytes of data every second, while humans, on the other hand, can process no more than 60 bits per second.
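As a toy illustration of “learning what normal looks like” (the metric, threshold and smoothing factor here are invented; real products model many signals at once and tune their thresholds carefully), a detector can keep an exponentially weighted baseline and flag values that deviate sharply from it:

    # Minimal streaming anomaly detector: learn a running baseline online and
    # flag measurements that sit far outside it.
    class StreamingAnomalyDetector:
        def __init__(self, alpha=0.05, threshold=4.0):
            self.alpha = alpha          # how quickly the baseline adapts
            self.threshold = threshold  # deviation (in std devs) that counts as anomalous
            self.mean = None
            self.var = 0.0

        def observe(self, value):
            if self.mean is None:       # first sample seeds the baseline
                self.mean = value
                return False
            deviation = value - self.mean
            std = self.var ** 0.5
            is_anomaly = std > 0 and abs(deviation) / std > self.threshold
            # update the exponentially weighted mean and variance
            self.mean += self.alpha * deviation
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            return is_anomaly

    detector = StreamingAnomalyDetector()
    for bytes_out in [1200, 1350, 1100, 1280, 9800]:   # hypothetical per-minute egress
        if detector.observe(bytes_out):
            print("flag for investigation:", bytes_out)   # fires on 9800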

Putting aside the need for speed and capacity, a larger issue with the traditional way of monitoring for security issues is that the rules are dumb. That’s not just name-calling, either; they’re literally dumb. Humans set rules that tell the machine how to act and what to do – the speed and processing capacity are irrelevant. While rule-based monitoring systems can be very complex, they’re still built on a basic “if this, then do that” formula. Enabling machines to think for themselves and feed better data and insight to the humans that rely on them is what will really improve security.

It’s almost absurd to not have a layer of security that thinks for itself. Imagine in the physical world if someone was crossing the border every day with a wheelbarrow full of dirt and the customs agents, being diligent at their jobs and following the rules, were sifting through that dirt day after day, never finding what they thought they were looking for. Even though that same person repeatedly crosses the border with a wheelbarrow full of dirt, no one ever thinks to look at the wheelbarrow. If they had, they would have quickly learned he’s been stealing wheelbarrows the whole time!

Just because no one told the customs agents to look for stolen wheelbarrows doesn’t make it OK, but as they say, hindsight is 20/20. In the digital world, we don’t have to rely on hindsight anymore, especially now that we have the power to put machine intelligence to work and recognize anomalies that could be occurring right under our noses. In order for cyber-security to be effective today, it needs at least a basic level of intelligence. Machines that learn on their own and detect anomalous activity can find the “wheelbarrow thief” that might be slowly syphoning data, even if you don’t specifically know that you’re looking for him.

Anomaly detection is among the first technology categories where machine learning is being put to use to enhance network and application security. It’s a form of advanced security analytics, which is a term that’s used quite frequently. However, there are a few requirements this type of technology must meet to truly be considered “advanced.” It must be easily deployed to operate continuously, against a broad array of data types and sources, and at huge data scales to produce high fidelity insights so as not to further add to the alert blindness already confronting security teams.

Leading analysts agree that machine learning will soon be a “need to have” in order to protect a network. In a Nov. 2014 Gartner report titled, “Add New Performance Metrics to Manage Machine-Learning-Enabled Systems,” analyst Will Cappelli directly states, “machine learning functionality will, over the next five years, gradually become pervasive and, in the process, fundamentally modify system performance and cost characteristics.”

While machine learning is certainly not a silver bullet that will solve all security challenges, there’s no doubt it will provide better information to help humans make better decisions. Let’s stop asking people to do the impossible and let machine intelligence step in to help get the job done.



 

 

While businesses plan to increase IT hiring in 2015, it may be easier said than done, especially when it comes to hiring software developers.

The good news is that more businesses are planning to boost their IT hiring in 2015. The bad news? Many are struggling to find talent to fill vacant or newly created roles, especially for software developers and data analytics pros, according to a recent survey from HackerRank, which matches IT talent with hiring companies using custom coding challenges.

In a survey of current and potential customers performed in March, HackerRank asked 1,300 hiring managers about their hiring outlook for the coming year, their hiring practices and the challenges they faced in filling open positions. Of those who responded to the survey, 76 percent say they planned to fill more technical roles in the remainder of 2015 than they did in 2014.
Theory vs. practice

But intending to fill open positions and actually filling them are two different things, as the survey results show. While 94 percent of respondents to the survey say they’re hiring Java developers and 68 percent are hiring for user interface/user experience (UI/UX) designers, 41 percent also claim these roles are difficult to fill.

“That number was the most surprising when we looked at the results. We knew it was going to be a significant percentage, but it seems customers are really struggling to fill these software development roles,” says Vivek Ravisankar, co-founder and CEO of HackerRank.
Java continues to dominate

The survey also revealed that Java continues to be the dominant language sought by hiring managers and recruiters. Of the survey respondents, 69 percent say Java is the most important skill candidates can have.

“Many of our customers are involved in Web-based business or in developing apps. And Java is instrumental for both of these business pursuits — we absolutely expected to hear this from the survey, and we weren’t surprised,” says Ravisankar.
What makes these positions so difficult to fill?

Part of the problem may lie with candidates’ perceptions of a company’s brand, says Tejal Parekh, HackerRank’s vice president of marketing. “We work with a lot of customers in areas that aren’t typically thought of as technology hotspots. For instance, in the finance sector we have customers facing a dearth of IT talent; they’re all innovative companies with a strong technology focus, but candidates don’t see them as such. They want to go to Facebook or Amazon,” says Parekh.

Another challenge lies with the expectations hiring companies have of their candidate pool, says Ravisankar. “There’s also an unconscious bias issue with customers who sometimes limit themselves by not looking outside the traditional IT talent pool. They’re only considering white, male talent from specific schools or specific geographic areas,” says Ravisankar.
Up the ante

As demand for IT talent increases, so do IT salaries. According to the survey, 67 percent of hiring managers say that salaries for technical positions have increased between 2014 and 2015 while 32 percent say they have stayed the same. Overall, HackerRank’s survey highlights the great opportunities available for software development talent and for the companies vying to hire them.



Microsoft released eight security bulletins, two rated critical, but four address remote code execution vulnerabilities that an attacker could exploit to take control of a victim’s machine.

For June 2015 “Update Tuesday,” Microsoft released eight security bulletins; only two of the security updates are rated critical for resolving remote code execution (RCE) flaws, but two patches rated important also address RCE vulnerabilities.

Rated as Critical
MS15-056 is a cumulative security update for Internet Explorer, which fixes 24 vulnerabilities. Qualys CTO Wolfgang Kandek added, “This includes 20 critical flaws that can lead to RCE which an attacker would trigger through a malicious webpage. All versions of IE and Windows are affected. Patch this first and fast.”

Microsoft said the patch resolves vulnerabilities by “preventing browser histories from being accessed by a malicious site; adding additional permission validations to Internet Explorer; and modifying how Internet Explorer handles objects in memory.”

MS15-057 fixes a hole in Windows that could allow remote code execution if Windows Media Player opens specially crafted media content that is hosted on a malicious site. An attacker could exploit this vulnerability to “take complete control of an affected system remotely.”

Rated as Important
MS15-058 is not listed, other than as a placeholder, but MS15-059 and MS15-060 both address remote code execution flaws.

MS15-059 addresses RCE vulnerabilities in Microsoft Office. Although it’s rated important for Microsoft Office 2010 and 2013, Microsoft Office Compatibility Pack Service Pack 3 and Microsoft Office 2013 RT, Kandek said it should be your second patching priority. If an attacker could convince a user to open a malicious file with Word or any other Office tool, then he or she could take control of a user’s machine. “The fact that one can achieve RCE, plus the ease with which an attacker can convince the target to open an attached file through social engineering, make this a high-risk vulnerability.”

MS15-060 resolves a vulnerability in Microsoft Windows “common controls.” The vulnerability “could allow remote code execution if a user clicks a specially crafted link, or a link to specially crafted content, and then invokes F12 Developer Tools in Internet Explorer.” Kandek explained, “MS15-060 is a vulnerability in the common controls of Windows which is accessible through Internet Explorer Developer Menu. An attack needs to trigger this menu to be successful. Turning off developer mode in Internet Explorer (why is it on by default?) is a listed work-around and is a good defense in depth measure that you should take a look at for your machines.”

The last four patches Microsoft issued address elevation of privilege vulnerabilities.

MS15-061 resolves bugs in Microsoft Windows kernel-mode drivers. “The most severe of these vulnerabilities could allow elevation of privilege if an attacker logs on to the system and runs a specially crafted application. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

MS15-062 addresses a security hole in Microsoft Active Directory Federation Services. Microsoft said, “The vulnerability could allow elevation of privilege if an attacker submits a specially crafted URL to a target site. Due to the vulnerability, in specific situations specially crafted script is not properly sanitized, which subsequently could lead to an attacker-supplied script being run in the security context of a user who views the malicious content. For cross-site scripting attacks, this vulnerability requires that a user be visiting a compromised site for any malicious action to occur.”

MS15-063 is another patch for Windows kernel that could allow EoP “if an attacker places a malicious .dll file in a local directory on the machine or on a network share. An attacker would then have to wait for a user to run a program that can load a malicious .dll file, resulting in elevation of privilege. However, in all cases an attacker would have no way to force a user to visit such a network share or website.”

MS15-064 resolves vulnerabilities in Microsoft Exchange Server by “modifying how Exchange web applications manage same-origin policy; modifying how Exchange web applications manage user session authentication; and correcting how Exchange web applications sanitize HTML strings.”

It would be wise to patch Adobe Flash while you are at it, as four of the 13 vulnerabilities patched are rated critical.

Happy patching!



Speaking Friday at Google’s I/O developer conference in San Francisco, Google developer advocate Colt McAnlis said that Android apps, almost across the board, are not architected correctly for the best networking performance.

“Networking performance is one of the most important things that every one of your apps does wrong,” he told the crowd.


By structuring the way apps access the network inefficiently, McAnlis said, developers are imposing needless costs in terms of performance and battery life – costs for which their users are on the hook.

“Bad networking costs your customers money,” he said. “Every rogue request you make, every out-of-sync packet, every two-bit image you request, the user has to pay for. Imagine if I went out and told them that.”

The key to fixing the problem? Use the radio less, and don’t move so much data around, McAnlis said.

One way to do this is batching, he said – architecting an app such that lower-priority data is sent when a device’s networking hardware has been activated by something else, minimizing the amount of time and energy used by the radio.

Pre-fetching data is another important technique for smoothing out network usage by Android apps, he said.

“If you can somehow sense that you’re going to make six or seven requests in the future, don’t wait for the device to go to sleep and then wake it up again – take advantage of the fact that the chip is awake right now, and make the requests right now,” McAnlis said.
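A platform-neutral sketch of that batching and piggybacking idea follows; the class and request strings are invented for illustration and are not an Android API. Low-priority requests wait in a queue and ride along when something else has already woken the radio.

    import time

    class BatchingRequestQueue:
        """Flush low-priority requests only when the radio is already awake."""

        def __init__(self, send, max_age_seconds=300):
            self.send = send            # stand-in for the app's real HTTP client
            self.max_age = max_age_seconds
            self.pending = []           # (queued_at, request) pairs

        def enqueue_low_priority(self, request):
            self.pending.append((time.time(), request))
            if time.time() - self.pending[0][0] > self.max_age:
                self.flush()            # don't hold data forever

        def send_high_priority(self, request):
            self.send(request)          # the radio is awake now anyway...
            self.flush()                # ...so piggyback everything that's waiting

        def flush(self):
            for _, request in self.pending:
                self.send(request)
            self.pending.clear()

    # Usage: analytics pings ride along with a user-triggered fetch.
    queue = BatchingRequestQueue(send=lambda req: print("sending", req))
    queue.enqueue_low_priority("POST /analytics ping-1")
    queue.enqueue_low_priority("POST /analytics ping-2")
    queue.send_high_priority("GET /timeline")   # flushes the two pings too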

He also urged developers to use Google Cloud Messaging, rather than relying on server polling for updates.

“Polling the server is horrible. … It is a waste of the user’s time,” McAnlis said. “Think about this: Every time you poll the server and it comes back with a null packet, telling you that there’s no new data, the user’s paying for that.”



 

 

6 IT leaders share tips to drive collaboration

Collaboration tools are destined to fail when IT leaders look to solve problems that don’t exist. Here’s how CIOs and IT managers ensure their collaborative platform efforts aren’t futile.

Driving enterprise collaboration is a tall order for CIOs and other IT leaders. The challenges don’t end after a new tool is implemented. If not done the right way for the right reasons, the headaches of deploying a collaboration platform can fester well beyond the technical hurdles.

The first thing to remember is that collaboration tool adoption in the enterprise is a journey, John Abel, senior vice president of IT at Hitachi Data Systems, told CIO.com in an email interview.

“It has to be appealing or provide a value or information where employees find it more difficult to access on other platforms,” Abel says.

Collaboration projects are almost destined to get bogged down when IT leaders pursue solutions to problems that don’t exist. So how can CIOs ensure success?

Empower employees and respect their needs

IT leaders should get insights into the tools employees already use and make sure they are personally invested in the selection process, Brian Lozada, director and CISO at the job placement firm Abacus Group, told CIO.com.

When employees are empowered, they are more likely to use and generate excitement for new collaboration tools internally, Lozada says. Employees ultimately contribute to and determine the success of most collaboration efforts.

It’s also important to acknowledge what success in enterprise collaboration looks like. This is particularly true when employees use collaboration tools to get work done more effectively, says NetScout’s CIO and Senior Vice President of Services Ken Boyd. “Freedom and flexibility are paramount to how most users want to work today.”

The less training required, the better: tools that are more intuitive tend to deliver greater benefits for the organization and the user.

“Faster employee engagement of a collaboration tool comes by addressing a pain point in a communication or productivity area, and showing how the tool, with a simple click, provides better or instant access to colleagues and information, shaves seconds or minutes off schedules, or provides greater visibility into a team project,” Boyd says.

Presenting the business benefit of faster and more widespread adoption of collaboration tools can be a strong motivator for many department heads as well, Boyd says.

User experience is a critical component of any tool and its chances for success, according to Shamlan Siddiqi, vice president of architecture and application development at the systems integrator NTT Data. “Users want something they can quickly deploy with the same immersive and collaborative experience that they get when using collaboration tools at home,” he says.

Gamification is a leading trigger for adoption

“Employee engagement techniques such as gamification and game-design principles help create incentives for users to engage and embrace tools more effectively,” says Siddiqi, adding that NTT Data has seen significant increases in collaborative tool engagement internally through the introduction of gamification.

Chris McKewon, founder and CEO of the managed services provider Xceptional Networks, agrees that gamification is the best way to encourage employees to use new tools.

“Gamification provides incentives for embracing the technology and to demonstrate how much more real work they can get done with these tools by selling the concepts in with benefits, not on features,” McKewon told CIO.com in an email interview.
Collaboration and the art of seduction

Ruven Gotz, director of collaboration services at the IT solutions vendor Avanade, says his team drives adoption by seduction.

“Our goal is to create collaboration experiences that users clearly recognize as the superior means to achieve the results they seek,” Gotz says.

When CIOs and IT leaders get enterprise collaboration right, there’s no need to drive adoption, Gotz says, because “employees recognize that we have provided a better working flow and will abandon other alternatives.”

