Archive for the ‘ Tech ’ Category


If the beta version of Apple’s next mobile OS is causing problems on your iDevice, there’s an easy out

This is a time of temptation for Apple enthusiasts, many of whom are eager to get their hands — and devices — on the company’s newest software. Between June, when company execs tout the upcoming versions of Apple’s desktop and mobile operating systems, and the fall, when the polished, finished versions arrive, Apple users get a chance to serve as beta testers.

Having a hardcore set of fans eager to try out the latest software is a benefit that Apple has embraced. Last year, it allowed users to check out pre-release versions of OS X 10.10 Yosemite. This year, they can beta test OS X 10.11 El Capitan and — for the first time — an early version of the company’s mobile operating system — in this case, iOS 9. (Not available as a public beta is the pre-release build of Watch OS, which is a good thing; some of the developers that have tried it have found it to be unstable, and who wants to brick their brand new Apple Watch?)

To participate, users must sign up for Apple’s Beta Software Program, which is free. The program allows access to relatively stable versions of the pre-release software and gives Apple engineers a wider audience to test it. That, theoretically, leads to more bugs being uncovered and fixed before the final release. Public betas roll out every few weeks; the most recent one arrived yesterday.


The problem with the time between beta and final releases is that many people who aren’t developers or technology insiders use their primary device to test what is actually unfinished software — and pre-release software is historically unstable, at best. Yes, Apple routinely warns you not to use your main iPhone, iPad or desktop to test the software. And users routinely ignore that advice.

But there’s good news for iPhone and iPad owners who took the plunge into iOS 9 and have now decided — whether because of problematic apps or the need for a more stable OS — that they prefer iOS 8. You can downgrade your device, and it’s not even that difficult to do. But there is a caveat: any data accumulated since your last backup under iOS 8 will be lost, even if you backed up recently while running iOS 9. Put simply, you cannot restore backup data from iOS 9 to a device running iOS 8; it’s not compatible. The best you can do is restore from your most recent iOS 8 backup.

Assuming you still want to return to iOS 8, here’s what to do.
If you’re a public beta tester (who hasn’t signed up to be a full-fledged developer), you can downgrade your iDevice by putting it into DFU mode. (DFU stands for Device Firmware Update.) This method restores iOS 8 without requiring you to download the older operating system manually.

First, perform a backup via iCloud or iTunes. Even though you won’t be able to use this data on iOS 8, it’s always better to have a backup than not. Then go to Settings > iCloud > Find My iPhone and turn off Find My iPhone.

Then follow these instructions to put the iPhone into DFU mode: Turn off the iPhone and plug it into your computer. Hold the Home button down while powering on the phone, and hold both until you see the Apple logo disappear. You can release the power button, but continue holding down the Home button until the iPhone’s screen displays instructions to plug the device into an iTunes-compatible computer. When prompted on your computer, click on the option to Restore, and iTunes will download the latest released version of iOS for your device.

If you’re a developer, log into the Apple Developer portal (after you turn off Find My iPhone), click on the section for iOS and download the latest officially released build. As of now, that’s iOS 8.4. Once the software is downloaded, open iTunes and click on the iPhone/iPad/iDevice tab. Within the Info tab, there are two buttons: Update and Restore. Hold down the Option key on the keyboard while clicking Restore. Navigate to the file that was just downloaded and select it. iTunes will then erase the iPhone or iPad’s contents and install the previous version of iOS.

Note: When downgrading to the previous version, make sure to option-click Restore; do not choose Update. Choosing Update instead will lead to a loop in which the iPhone is placed in Recovery mode: iTunes attempts to download and install the latest official build, runs into errors, and then attempts to download another copy of the official build. It will do that until you break the cycle and choose to Restore the device. So again, don’t select Update.

Given that Apple software upgrades now routinely roll out in the fall, upgrading your devices to unstable software isn’t a good way to spend the summer. For most people, I’d recommend waiting. The latest features are really only worth having when your device is stable, especially if it’s something you rely on day in and day out. But if running the latest software is your thing, then by all means, have at it. And at least if you run into problems on your iDevice, you now know how to get out of trouble.



Endpoint protection technology is making strides and may soon be touted as an anti-virus replacement

Rather than looking for signatures of known malware, as traditional anti-virus software does, next-generation endpoint protection platforms analyze processes, changes and connections in order to spot activity that indicates foul play. While that approach is better at catching zero-day exploits, issues remain.

For instance, intelligence about what devices are doing can be gathered with or without client software. So businesses are faced with a choice: either go without a client and gather less detailed threat information, or collect a wealth of detail but face the deployment, management and updating issues that come with installing agents.

Then comes the challenge of teasing out evidence that incursions are unfolding without being overwhelmed by the flood of data being collected. Once attacks are discovered, businesses have to figure out how to shut them down as quickly as possible.

Vendors trying to deal with these problems include those with broad product lines such as Cisco and EMC, established security vendors such as Bit9+Carbon Black, FireEye, ForeScout, Guidance Software and Trend Micro, and newer companies focused on endpoint security such as Cylance, Light Cyber, Outlier Security and Tanium. That’s just a minute sampling; the field is crowded, and the competitors are coming up with varying ways to handle these issues.

The value of endpoint protection platforms is that they can identify specific attacks and speed the response to them once they are detected. They do this by gathering information about communications that go on among endpoints and other devices on the network, as well as changes made to the endpoint itself that may indicate compromise. The database of this endpoint telemetry then becomes a forensic tool for investigating attacks, mapping how they unfolded, discovering what devices need remediation and perhaps predicting what threat might arise next.

Agent or not?
The main aversion to agents in general is that they are one more piece of software to deploy, manage and update. In the case of next-gen endpoint protection, they do provide vast amounts of otherwise uncollectable data about endpoints, but that can also be a downside.

Endpoint agents gather so much information that it may be difficult to sort out the attacks from the background noise, so it’s important that the agents are backed by an analysis engine that can handle the volume of data being thrown at it, says Gartner analyst Lawrence Pingree. The amount of data generated varies depending on the agent and the type of endpoint.
Without an agent, endpoint protection platforms can still gather valuable data about what machines are doing by tapping into switch and router data and monitoring Windows Network Services and Windows Management Instrumentation. This information can include who’s logged in to the machine, what the user does, patch levels, whether other security agents are running, whether USB devices are attached, what processes are running, etc.

Analysis can reveal whether devices are creating connections outside what they would be expected to make, a possible sign of lateral movement by attackers seeking ways to victimize other machines and escalate privileges.
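
To make that concrete, here is a minimal sketch of the baseline idea in Python (the telemetry format is invented for illustration; this is not any vendor’s actual engine): learn which connections each endpoint normally makes, then flag pairs that fall outside that history, the kind of signal that can point to lateral movement.

    from collections import defaultdict

    def build_baseline(history):
        """Learn which (source, destination) pairs are normal
        from historical connection records."""
        baseline = defaultdict(set)
        for src, dst in history:
            baseline[src].add(dst)
        return baseline

    def flag_anomalies(baseline, current):
        """Return connections outside the learned baseline,
        a possible sign of lateral movement."""
        return [(src, dst) for src, dst in current
                if dst not in baseline.get(src, set())]

    history = [("laptop-17", "fileserver"), ("laptop-17", "mailserver")]
    current = [("laptop-17", "mailserver"), ("laptop-17", "domain-controller")]

    for src, dst in flag_anomalies(build_baseline(history), current):
        print("ALERT: unexpected connection %s -> %s" % (src, dst))

A real engine would weigh many more signals (ports, timing, process lineage), but the baseline-and-deviation pattern is the same.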

Agents can mean one more management console, which means more complexity and potentially more cost, says Randy Abrams, a research director at NSS Labs who researches next-gen EPP platforms. “At some point that’s going to be a difference in head count,” he says; more staff are required to handle all the consoles, and that translates into more cost.

It’s also a matter of compatibility, says Rob Ayoub, also a research director at NSS Labs. “How do you ensure any two agents – of McAfee and Bromium or Cylance – work together, and who do you call if they don’t?”

Security of the management and administration of these platforms should be reviewed as well, Pingree says, to minimize insider threats to the platforms themselves. Businesses should look for EPP platforms with tools that allow different levels of access for IT staff performing different roles. It would be useful, for example, to authorize limited access for admins while incident-response engineers get greater access, he says.

Analysis engines
Analysis is essential but also complex, so much so that it can be a standalone service such as the one offered by Red Canary. Rather than gather endpoint data with its own agents, Red Canary employs sensors provided by Bit9+Carbon Black. It supplements that data with threat intelligence gathered from a variety of other commercial security firms, analyzes it all and generates alerts about intrusions it finds on customers’ networks.

The analysis engine flags potential trouble, but human analysts check out flagged events to verify they are real threats. This helps corporate security analysts by cutting down on the number of alerts they have to respond to.

Startup Barkly says it’s working on an endpoint agent that locally analyzes what each endpoint is up to and automatically blocks malicious activity. It also notifies admins about actions it takes.

These engines need to be tied into larger threat-intelligence sources that characterize attacks by how they unfold, revealing activity that leads to a breach without using code that can be tagged as malware, says Abrams.

Most of what is known about endpoint detection and response tools is what the people who make them say they can do. So, if possible, businesses should run trials to determine features and effectiveness first-hand before buying. “The downside of emerging technologies is there’s very little on the testing side,” Pingree says.

Remediation
Endpoint detection tools gather an enormous amount of data that can be used tactically to stop attacks, but also to support forensic investigations into how incursions progressed to the point of becoming exploits. This can help identify which devices need remediation, and some vendors are looking to automate that process.

For example, Triumfant offers Resolution Manager, which can restore endpoints to known good states after detecting malicious activity. Other vendors offer remediation features or say they are working on them, and the trend is toward using the same platforms to fix the problems they find.

The problem businesses face is that endpoints remain vulnerable despite the efforts of traditional endpoint security, which has evolved into security suites – anti-virus, anti-malware, intrusion detection, intrusion prevention, etc. While that evolution progressively chips away at the problem, it creates another one.

“They have actually just added more products to the endpoint portfolio, thus taking us full circle back to bloated end points,” says Larry Whiteside, the CSO for the Lower Colorado River Authority. “Luckily, memory and disk speed (SSD) have kept that bulk from crippling endpoint performance.”

As a result he is looking at next-generation endpoint protection from SentinelOne. Security based on what endpoints are doing as opposed to seeking signatures of known malicious behavior is an improvement over traditional endpoint protection, he says. “Not saying signatures are totally bad, but that being a primary or only decision point is horrible. Therefore, adding behavior based detection capabilities adds value.”

So much value that he is more concerned about that than he is about whether there is a hard return on investment. “The reality is that I am more concerned about detection than I am ROI, so I may not even perform that analysis. I can say that getting into a next-gen at the right stage can be beneficial to an organization,” he says.

Anti-virus replacement?
So far vendors of next-generation endpoint protection have steered clear of claiming their products can replace anti-virus software, despite impressive test results. But that could be changing. Within a year, regulatory hurdles that these vendors face may disappear, says George Kurtz, CEO of CrowdStrike.

Within a year, he predicts, rules that require the use of anti-virus in order to pass compliance tests will accept next-generation endpoint protection as well. “That’s really our goal,” he says. “From the beginning we thought we could do that.”

He says everyone is focused on malware, but that represents just 40% of attacks. The rest he calls “malware-less intrusions” such as insider theft where attackers with credentials steal information without use of malware.

Until regulations are rewritten, it’s important for regulated businesses to meet the anti-virus requirement, Abrams says, even though other platforms may offer better protection. “In some cases that’s actually more important than the ability to protect, because you won’t be protected from legal liabilities.”

Meanwhile, having overlapping anti-virus and next-gen endpoint protection means larger enterprises are more likely customers for now than smaller businesses with fewer resources, he says. But even for smaller businesses the cost may be worth it.

“What do they have to lose, and how much does it cost to lose this information vs. how much does it cost to protect it?” Abrams says.


 


Top 10 job boards on Twitter

Written by admin
July 16th, 2015


Celebrities, politicians and companies all have a Twitter account today, so why not job boards? Here are 10 job boards that are using Twitter better than the competition.

Top job boards on Twitter
Twitter isn’t just for celebrities, companies and parody accounts. It’s now an outlet for job boards as well. Turning to Twitter in your job search might not feel natural, but Twitter is becoming a popular recruitment tool. As social media becomes a mainstay of everyday life, it’s becoming part of the job search as well.

Engagement Labs, creator of the eValue score, rates how well companies use social media based on likes, follows and overall audience engagement. Here are 10 social job boards using Twitter better than the competition.

#1 Twitter: Monster
Monster’s main Twitter handle, where the company shares both unique and shared content, has over 150,000 followers. Its eValue score was “20 percent higher than their nearest competitor,” according to Engagement Labs, along with the highest impact score, indicating its content is reaching a large — and interested — audience.

#2 Twitter: CareerOneStop
CareerOneStop, sponsored by the U.S. Department of Labor, is a socially successful government website, coming in second for its use of Twitter and its ability to engage its audience of over 5,000 followers. The site offers a number of helpful resources for job seekers in every industry.

#3 Twitter: ZipRecruiter
ZipRecruiter may have a modest following of around 4,000 on Twitter, but the company has created a social outlet for its services, and its followers are engaged in the experience. ZipRecruiter posts a range of job-seeker-related content, updates about the company, industry news and, of course, job listings. The site pulls in jobs from other well-known job boards, including Monster, Glassdoor and SimplyHired, just to name a few.

#4 Twitter: AOL Jobs
AOL has come a long way since it dominated the Internet back in the ’90s; the company has since moved on from dial-up tones and mailing out its latest software. The Internet company has now extended its reach into the job market with AOL Jobs, and it’s getting the right feedback on Twitter to put it at number 4 on this list. With over 13,000 followers, AOL Jobs’ Twitter feed mostly features original – and interesting – job-seeker-focused content that will draw you into the AOL Jobs homepage.

#5 Twitter: FlexJobs
FlexJobs helps you find jobs that aren’t your typical 9-to-5 office roles. It includes remote opportunities, freelance work and other less conventional career listings on its jobs board. The FlexJobs Twitter account, with more than 8,000 followers, houses content related to flexible job schedules, remote work and telecommuting. It’s number 5 on the list of companies with the most powerful social job boards, so if you’re looking for remote, part-time or freelance work, it might be the right account to follow.

#6 Twitter: CareerBuilder
CareerBuilder is a well-known career site and jobs board, but it also dominates the top 10 list for Twitter. At number 6, CareerBuilder uses its Twitter account to connect with nearly 150,000 followers and share content related to job searching, employment, recent college graduates and, of course, job postings.

#7 Twitter: Mediabistro
Mediabistro is more than a jobs board. The website also includes educational programs, articles and industry events in addition to job listings. Its Twitter account, with over 170,000 followers, is no different. The social account features job listings, information for job seekers, tips and strategies for finding the right job and more. Mediabistro also poses questions to its followers and uses funny hashtags and memes, going the extra mile to connect with its audience.

#8 Twitter: Glassdoor
Glassdoor was a pioneer for job seekers, bringing them reliable salary data and reviews from current and former employees at a large number of companies. It’s now channeling its know-how and data into a well-rounded Twitter account with over 80,000 followers. The company features original content, shared articles and job search statistics on Twitter, making it another great option to follow if you are in the market for a new job.

#9 Twitter: Snagajob
Snagajob isn’t successful only on Facebook; it also makes the top 10 list for Twitter. It’s clear that Snagajob is trying to connect with its millennial followers, with its use of emojis and references to pop culture, and it seems to be working. The account has over 14,000 followers and scored high on the list of companies using Twitter effectively.

#10 Twitter: TheLadders
Similar to other jobs boards, TheLadders has a wealth of job-seeker related content on its Twitter account. With over 60,000 followers, TheLadders shares and posts content from its own site, articles from other sources and networking tips. It’s focused on connecting with driven job seekers who want to push their career onward and upward, and its Twitter efforts seem to be doing the trick.


 


Even as PC business contracts for 14th straight quarter, Mac sales surge 16%

Skittish about the impact of Windows 10, including the free upgrade-from-Windows-7-and-8.1 offer, computer makers drew down inventories and sent PC shipments plummeting in the June quarter, IDC said today.

The quarter was among the worst ever for personal computers, according to the research firm, which estimated the year-over-year contraction at 11.8%. That decline has been exceeded only twice in the two decades that IDC has tracked shipments: in early 2013, when the January quarter was off 13%, and in the September quarter of 2001, which posted a decline of 12%.

OEMs (original equipment manufacturers) shipped approximately 66 million systems in the three months that ended June 30, IDC said, down from the 75 million during the same stretch in 2014.

The dramatic downturn was due to several factors, said IDC analyst Loren Loverde, who runs the firm’s PC forecast team, including a tough comparison with last year, when enterprises scrambled to replace obsolete Windows XP machines. The 2001 operating system was retired by Microsoft in April 2014.

But Windows 10 also played a part, Loverde contended. “We’ve heard from various parties, including ODMs [original device manufacturers], component makers and distributors, that they’ve reduced inventory as Windows 10 approached,” he said.

Although the industry is more bullish about Windows 10 than it was about its predecessor, Windows 8, that hasn’t been reflected in larger shipments, simply because OEMs aren’t sure how the new OS will play out in the coming quarter or two. To safeguard against overstocking the channel, and to some extent to prepare for the launch of Windows 10, OEMs played it conservatively and tightened inventories by building fewer PCs.

“Although it’s very difficult to quantify, I’d say that this inventory reduction is a little bit more dramatic than before Windows 8,” said Loverde.

Three years ago, inventories surged as PC makers cranked out devices — 85 million in the second quarter of 2012, 88 million in the third — figuring that Windows 8 was going to be a big hit and juice sales. That didn’t happen.

“There were a lot of [retail and distribution] customers buying additional inventory and promoting Windows 8,” Loverde said. “The [negative] impact on inventory is more substantial this time, and everyone is taking a wait-and-see approach, thinking that they’ll make decisions in the second half of the year.”

Some of the nervousness on the part of computer makers revolves around the upgrade offer Microsoft will extend to all consumers and many businesses with existing PCs running Windows 7 or Windows 8.1. Starting July 29, Microsoft will give those customers a free upgrade to Windows 10. The deal will expire a year later, on July 29, 2016.

Because Microsoft has never before offered a free upgrade of this magnitude, it’s uncharted territory for Windows OEMs. A host of unknowns, ranging from whether the free upgrade will keep significant numbers on old hardware to the eventual reaction to the new OS, have made computer makers edgy about committing to fully packing the channel.

“It’s even riskier when the market is declining,” Loverde said of carrying large inventories.

And the PC business has been in decline, and will continue to contract.

IDC has held to its prediction that for 2015, global PC shipments will be down 6.2% from last year’s 308 million, or to around 289 million. (That may change to an even more depressing number; Loverde said IDC had not yet adjusted the figure to account for the worse-than-expected second quarter.) In 2016, the industry will shrink by another 2%.

The brightest spot in the quarter’s forecast was again Apple, which IDC had in the OEM fourth spot with shipments of 5.1 million Macs, a year-over-year jump of 16%. Other manufacturers in the top five — Lenovo, HP, Dell and Acer — were pegged with declines of 8%, 10%, 9% and 27%, respectively.

“Apple’s a pretty unique company,” said Loverde. “They’ve cultivated their market position and product portfolio, and, of course, their positioning is towards more affluent buyers who are not as price sensitive.”

Loverde was convinced that some of the Mac’s strong sales in the June quarter benefited from uncertainties about Windows 10 on the part of consumers.

Unclear, said Loverde, is how the Mac will fare if, as IDC and others believe, Apple introduces a larger iPad later this year, a tablet better geared to the productivity chores typically handled by personal computers.

“I think there will be some impact on Mac shipments, but Apple is always willing to cannibalize its own products,” he said. “But the upside on tablets [generated by a larger iPad] and as a brand is bigger than the risk.”


SDN will support IoT by centralizing control, abstracting network devices, and providing flexible, dynamic, automated reconfiguration of the network

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Organizations are excited about the business value of the data that will be generated by the Internet of Things (IoT). But there’s less discussion about how to manage the devices that will make up the network, secure the data they generate and analyze it quickly enough to deliver the insights businesses need.

Software defined networking (SDN) can help meet these needs. By virtualizing network components and services, SDN can rapidly and automatically reconfigure network devices, reroute traffic and apply authentication and access rules. All this can help speed and secure data delivery, and improve network management, for even the most remote devices.

SDN enables the radical simplification of network provisioning with predefined policies for plug-and-play set-up of IoT devices, automatic detection and remediation of security threats, and the provisioning of the edge computing and analytics environments that turn data into insights.

Consider these two IoT use cases:
* Data from sensors within blowout preventers can help oil well operators save millions of dollars a year in unplanned downtime. These massive data flows, ranging from pressure readings to valve positions, are now often sent from remote locations to central servers over satellite links. This not only increases the cost of data transmission but delays its receipt and analysis. This latency can be critical – or even deadly – when the data is used to control powerful equipment or sensitive industrial processes.

Both these problems will intensify as falling prices lead to the deployment of many more sensors, and technical advances allow each sensor to generate much more data. Processing more data at the edge (i.e. near the well) and determining which is worth sending to a central location (what some call Fog or Edge Computing) helps alleviate both these problems. So can the rapid provisioning of network components and services, while real-time application of security rules helps protect proprietary information.

* Data from retail environments, such as from a customer’s smartphone monitoring their location and the products they take pictures of, or in-store sensors monitoring their browsing behavior, can be used to deliver customized offers to encourage an immediate sale. Again, the volume of data and the need for fast analysis and action calls for the rapid provisioning of services and edge data processing, along with rigorous security to ease privacy concerns.

Making such scenarios real requires overcoming unprecedented challenges.
One is the sheer number of devices, which is estimated to reach 50 billion by 2020, with each new device expanding the “attack surface” exposed to hackers. Another is the amount of data moving over this network, with IDC projecting IoT will account for 10% of all data on the planet by 2020.

Then there is the variety of devices that need to be managed and supported. These range from network switches supporting popular management applications and protocols, to legacy SCADA (supervisory control and data acquisition) devices and those that lack the compute and/or memory to support standard authentication or encryption. Finally, there is the need for very rapid, and even real-time, response, especially for applications involving safety (such as hazardous industrial processes) or commerce (such as monitoring of inventory or customer behavior).

Given this complexity and scale, manual network management is simply not feasible. SDN provides the only viable, cost-effective means to manage the IoT, secure the network and the data on it, minimize bandwidth requirements and maximize the performance of the applications and analytics that use its data.

SDN brings three important capabilities to IoT:
Centralization of control through software that has complete knowledge of the network, enabling automated, policy-based control of even massive, complex networks. Given the huge potential scale of IoT environments, SDN is critical in making them simple to manage.

Abstraction of the details of the many devices and protocols in the network, allowing IoT applications to access data, enable analytics and control the devices, and add new sensors and network control devices, without exposing the details of the underlying infrastructure. SDN simplifies the creation, deployment and ongoing management of the IoT devices and the applications that benefit from them.

The flexibility to tune the components within the IoT (and manage where data is stored and analyzed) to continually maximize performance and security as business needs and data flows change. IoT environments are inherently dispersed, with many end devices and edge computing. As a result, the network is even more critical than in standard application environments. SDN’s ability to dynamically change network behavior based on new traffic patterns, security incidents and policy changes will enable IoT environments to deliver on their promise.

For example, through the use of predefined policies for plug-and-play setup, SDN allows for the rapid and easy addition of new types of IoT sensors. By abstracting network services from the hardware on which they run, SDN allows automated, policy-based creation of virtual load balancers, quality of service for various classes of traffic, and the provisioning of network resources for peak demands.

The ease of adding and removing resources reduces the cost and risk of IoT experiments by allowing the easy deprovisioning and reuse of the network infrastructure when no longer needed.

SDN will make it easier to find and fight security threats through the improved visibility it provides into network traffic, right to the edge of the network. It also makes it easy to apply automated policies to redirect suspicious traffic to, for example, a honeynet where it can be safely examined. By making network management less complex, SDN allows IT to set and enforce more segmented access controls.
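
What pushing such a policy looks like depends entirely on the controller’s API. As a rough sketch, assuming a hypothetical REST endpoint at http://controller.example/policies (invented for illustration, not any real product’s API), the redirect could be expressed in a few lines of Python:

    import requests

    # Hypothetical policy: steer traffic from a suspicious host into a
    # honeynet segment for safe inspection. Endpoint and schema are
    # invented for illustration.
    policy = {
        "name": "redirect-suspicious-host",
        "match": {"src_ip": "10.0.4.23"},
        "action": {"redirect_to": "honeynet-segment"},
        "priority": 100,
    }

    resp = requests.post("http://controller.example/policies",
                         json=policy, timeout=5)
    resp.raise_for_status()
    print("Policy installed:", resp.json())

The point is less the specific call than the model: one API request to a central controller instead of box-by-box reconfiguration.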

SDN can provide a dynamic, intelligent, self-learning layered model of security that provides walls within walls and ensures people can only change the configuration of the devices they’re authorized to “touch.” This is far more useful than the traditional “wall” around the perimeter of the network, which won’t work with the IoT because of its size and the fact the enemy is often inside the firewall, in the form of unauthorized actors updating firmware on unprotected devices.

Finally, by centralizing configuration and management, SDN will allow IT to effectively program the network to make automatic, real-time decisions about traffic flow. It allows not only sensor data but also data about the health of the network to be analyzed close to the network edge, giving IT the information it needs to prevent traffic jams and security risks. The centralized configuration and management of the network, and the abstraction of network devices, also make it far easier to manage applications that run on the edge of the IoT.

For example, SDN will allow IT to fine-tune data aggregation, so data that is less critical is held at the edge and not transmitted to core systems until it won’t slow critical application traffic. This edge computing can also perform fast, local analysis and speed the results to the network core if the analysis indicates an urgent situation, such as the impending failure of a jet engine.

Prepare Now
IT organizations can become key drivers in capturing the promised business value of IoT through the use of SDNs. But this new world is a major change and will require some planning.

To prepare for the intersection of IoT and SDN, you should start thinking about what policies in areas such as security, Quality of Service (QoS) and data privacy will make sense in the IoT world, and how to structure and implement such policies in a virtualized network.

All companies have policies today, but typically they are implicit – that is, buried in a morass of ACLs and network configurations. SDN turns this process on its head, allowing IT teams to develop human-readable policies that are implemented by the network. IT teams should start understanding how they’ve configured today’s environment so that they can decide which policies should be brought forward.
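
As a sketch of the difference, here is a declarative, human-readable policy and the ACL-style rules it expands into; the schema is invented for illustration:

    # A human-readable intent, and the low-level rules it expands into.
    policy = {
        "description": "Only the app tier may reach the database tier",
        "allow": [("app-tier", "db-tier", 5432)],
        "default": "deny",
    }

    def expand_to_rules(policy, segments):
        """Translate declarative intent into per-segment allow/deny rules,
        the job an SDN controller would automate."""
        rules = []
        for src, dst, port in policy["allow"]:
            rules.append({"src": segments[src], "dst": segments[dst],
                          "port": port, "action": "allow"})
        rules.append({"action": policy["default"]})  # catch-all deny
        return rules

    segments = {"app-tier": "10.1.0.0/24", "db-tier": "10.2.0.0/24"}
    for rule in expand_to_rules(policy, segments):
        print(rule)

Today that expansion lives in human heads and change tickets; under SDN, it becomes something the controller does automatically.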

They should plan now to include edge computing and analytics in their long-term vision of the network. At the same time, they should remember that IoT and SDN are in their early stages, meaning their network and application planners should expect unpredicted changes in, for example, the amounts of data their networks must handle, and the need to dynamically reconfigure them for local rather than centralized processing. The key enablers, again, will be centralization of control, abstraction of network devices and flexible, dynamic, automated reconfiguration of the network. Essentially, this means isolating network slices to segment the network, proactively pushing policy via a centralized controller to cordon off various types of traffic. Centralized control planes offer the advantages of easy operations and management.

IT teams should also evaluate their network, compute and data needs across the entire IT spectrum, as the IoT will require an end-to-end SDN solution encompassing all manner of devices, not just those from one domain within IT, but across the data center, Wide Area Network (WAN) and access.

Lastly, IT will want to get familiar with app development in edge computing environments, which mix local and centralized processing. As the network abstraction exposed to the application layer changes and becomes highly programmable, network teams need to invest in resources and training around these programming models (e.g., REST) so that they can more easily partner with app development teams.

IoT will be so big, so varied and so remote that conventional management tools just won’t cut it. Now is the time to start learning how SDN can help you manage this new world and assure the speedy, secure delivery and analysis of the data it will generate.



In the earliest days of Amazon.com, SQL databases weren’t cutting it, so the company created Dynamo, and in doing so helped usher in the NoSQL market

Behind every great ecommerce website is a database, and in the early 2000s Amazon.com’s database was not keeping up with the company’s business.

Part of the problem was that Amazon didn’t have just one database – it relied on a series of them, each with its own responsibility. As the company headed toward becoming a $10 billion business, the number and size of its SQL databases exploded and managing them became more challenging. By the 2004 holiday shopping rush, outages became more common, caused in large part by overloaded SQL databases.

Something needed to change.
But instead of looking for a solution outside the company, Amazon developed its own database management system. It was a whole new kind of database, one that threw out the rules of traditional SQL varieties and was able to scale up and up and up. In 2007 Amazon shared its findings with the world: CTO Werner Vogels and his team released a paper titled “Dynamo: Amazon’s Highly Available Key-value Store.” Some credit its publication as the moment the NoSQL database market was born.

The problem with SQL
The relational databases that have been around for decades and most commonly use the SQL programming language are ideal for organizing data in neat tables and running queries against them. Their success is undisputed: Gartner estimates the SQL database market to be $30 billion.

But in the early to mid-2000s, companies like Amazon, Yahoo and Google had data demands that SQL databases just didn’t address well. (To throw a bit of computer science at you, the CAP theorem states that a distributed system, such as a big database, cannot simultaneously guarantee all three of consistency, availability and partition tolerance. SQL databases prioritize consistency over speed and flexibility, which makes them great for managing core enterprise data such as financial transactions, but less suited to other types of jobs.)

Take Amazon’s online shopping cart service, for example. Customers browse the ecommerce website and put something in their virtual shopping cart where it is saved and potentially purchased later. Amazon needs the data in the shopping cart to always be available to the customer; lost shopping cart data is a lost sale. But, it doesn’t necessarily need every node of the database all around the world to have the most up-to-date shopping cart information for every customer. A SQL/relational system would spend enormous compute resources to make data consistent across the distributed system, instead of ensuring the information is always available and ready to be served to customers.

One of the fundamental tenets of Amazon’s Dynamo, and NoSQL databases in general, is that they sacrifice data consistency for availability. Amazon’s priority is to maintain shopping cart data and to have it served to customers very quickly. Plus, the system has to be able to scale to serve Amazon’s fast-growing demand. Dynamo solves all of these problems: It backs up data across nodes, and can handle tremendous load while maintaining fast and dependable performance.
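
Dynamo itself was an internal system, but today’s DynamoDB API exposes the same tradeoff directly: reads are eventually consistent by default (fast and highly available), and callers must explicitly ask, and pay more, for strongly consistent reads. A minimal sketch with the boto3 SDK, assuming a “carts” table keyed on customer_id already exists:

    import boto3

    carts = boto3.resource("dynamodb").Table("carts")  # assumed to exist

    # Save a shopping cart; availability is the priority.
    carts.put_item(Item={"customer_id": "c-1001",
                         "items": [{"sku": "B00EXAMPLE", "qty": 1}]})

    # Default read: eventually consistent -- fast, but may briefly lag.
    fast = carts.get_item(Key={"customer_id": "c-1001"})

    # Opt-in strongly consistent read: always current, at higher cost.
    strong = carts.get_item(Key={"customer_id": "c-1001"},
                            ConsistentRead=True)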

“It was one of the first NoSQL databases,” explains Khawaja Shams, head of engineering at Amazon DynamoDB. “We traded off consistency and very rigid querying semantics for predictable performance, durability and scale – those are the things Dynamo was super good at.”

DynamoDB: A database in the cloud
Dynamo fixed many of Amazon’s problems that SQL databases could not. But throughout the mid-to-late 2000s, it still wasn’t perfect. Dynamo boasted the functionality that Amazon engineers needed, but required substantial resources to install and manage.

The introduction of DynamoDB in 2012 proved to be a major upgrade, though. The hosted version of the database Amazon uses internally lives in Amazon Web Services’ IaaS cloud and is fully managed. Amazon engineers and AWS customers don’t provision a database or manage storage of the data. All they do is request the throughput they need from DynamoDB. Customers pay $0.0065 per hour for about 36,000 writes to the database per hour, plus $0.25 per GB of data stored in the system per month. If the application needs more capacity, then with a few clicks the database spreads the workload over more nodes.
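
Requesting throughput rather than provisioning servers looks roughly like this with the boto3 SDK; the table name and capacity numbers are illustrative:

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # Declare the table and the read/write capacity you want; AWS handles
    # storage, partitioning and replication underneath.
    table = dynamodb.create_table(
        TableName="carts",
        KeySchema=[{"AttributeName": "customer_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "customer_id",
                               "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 100,
                               "WriteCapacityUnits": 100},
    )
    table.wait_until_exists()

Scaling up later is a matter of raising those capacity numbers, not adding hardware.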

AWS is notoriously opaque about how DynamoDB and many of its other infrastructure-as-a-service products run under the covers, but a promotional video reveals that the service employs solid-state drives, and notes that when customers use DynamoDB, their data is spread across availability zones/data centers to ensure availability.

Forrester principal analyst Noel Yuhanna calls it a “pretty powerful” database and considers it one of the top NoSQL offerings, especially for key-value store use cases.

DynamoDB has grown significantly since its launch. While AWS will not release customer figures, company engineer James Hamilton said in November that DynamoDB has grown 3x in requests it processes annually and 4x in the amount of data it stores compared to the year prior. Even with that massive scale and growth, DynamoDB has consistently returned queries in three to four milliseconds.

A demo video shows DynamoDB’s remarkably consistent performance even as more stress is put on the system; the demo begins at the 16:47 mark.
Feature-wise, DynamoDB has grown, too. NoSQL databases are generally broken into a handful of categories: key-value store databases organize information with a key and a value; document databases allow full documents to be searched against; and graph databases track connections between data. DynamoDB originally started as a key-value database, but last year AWS expanded it to become a document database by supporting JSON-formatted files. AWS last year also added Global Secondary Indexes to DynamoDB, which give users additional indexed copies of their data for querying, analytics or testing alongside the production table.
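
Document support means a nested, JSON-style object can be stored and fetched as a unit; with boto3, the mapping to and from Python dicts and lists is automatic. The table and attribute names below are invented for illustration:

    import boto3

    products = boto3.resource("dynamodb").Table("products")  # assumed table

    # A document: nested maps and lists are stored natively.
    products.put_item(Item={
        "product_id": "p-42",
        "title": "Example Widget",
        "attributes": {"color": "blue", "sizes": ["S", "M", "L"]},
        "reviews": [{"stars": 5, "text": "Works great"}],
    })

    doc = products.get_item(Key={"product_id": "p-42"})["Item"]
    print(doc["attributes"]["sizes"])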

NoSQL’s use case and vendor landscape
The fundamental advantage of NoSQL databases is their ability to scale and have flexible schema, meaning users can easily change how data is structured and run multiple queries against it. Many new web-based applications, such as social, mobile and gaming-centric ones, are being built using NoSQL databases.

While Amazon may have helped jumpstart the NoSQL market, it is now one of dozens of vendors attempting to cash in on it. Nick Heudecker, a Gartner researcher, stresses that even though NoSQL has captured the attention of many developers, it is still a relatively young technology. He estimates revenues of NoSQL products to not even surpass half a billion dollars annually (that’s not an official Gartner estimate). Heudecker says the majority of his enterprise client inquiries are still around SQL databases.

NoSQL competitors MongoDB, MarkLogic, Couchbase and DataStax have strong standings in the market as well, and some seem to have greater traction among enterprise customers compared to DynamoDB, Heudecker says.

Living in the cloud

What’s holding DynamoDB back in the enterprise market? For one, it has no on-premises version – it can only be used in AWS’s cloud. Some users just aren’t comfortable using a cloud-based database, Heudecker says. DynamoDB competitors offer users the opportunity to run databases on their own premises behind their own firewall.


Shams, AWS’s DynamoDB engineering head, says because the technology is hosted in the cloud, users don’t have to worry about configuring or provisioning any hardware. They just use the service and scale it up or down based on demand, while paying only for storage and throughput, he says.

For security-sensitive customers, there are opportunities to encrypt data as DynamoDB stores it. Plus, DynamoDB is integrated with AWS – the market’s leading IaaS platform, according to Gartner’s Magic Quadrant report – which supports a variety of tools, including relational database services such as Aurora and RDS.

AdRoll rolls with AWS DynamoDB

Marketing platform provider AdRoll, which serves more than 20,000 customers in 150 countries, is among the organizations comfortable using the cloud-based DynamoDB. Basically, if an ecommerce site visitor browses a product page but does not buy the item, AdRoll bids on ad space on other sites the user visits to show the product they were previously considering. It’s an effective method for nudging people to buy products they already wanted.

It’s really complicated for AdRoll to figure out which ads to serve to which users, though. Even more complicated: AdRoll must decide, in about the time it takes a webpage to load, whether it will bid on an ad spot and which ad to place. That’s the job of CTO Valentino Volonghi – he has about 100 milliseconds to play with. Most of that time is gobbled up by network latency, so needless to say AdRoll requires a reliably fast platform. It also needs huge scale: AdRoll considers more than 60 billion ad impressions every day.

AdRoll uses DynamoDB and Amazon’s Simple Storage Service (S3) to sock away data about customers and help its algorithm decide which ads to buy for customers. In 2013, AdRoll had 125 billion items in DynamoDB; it’s now up to half a trillion. It makes 1 million requests to the system each second, and the data is returned in less than 5 milliseconds — every time. AdRoll has another 17 million files uploaded into Amazon S3, taking up more than 1.5 petabytes of space.

AdRoll didn’t have to build a global network of data centers to power its product, thanks in large part to using DynamoDB.

“We haven’t spent a single engineer to operate this system,” Volonghi says. “It’s actually technically fun to operate a database at this massive scale.”

Not every company is going to have the needs of Amazon.com’s ecommerce site or AdRoll’s real-time bidding platform. But many are struggling to achieve greater scale without major capital investments. The cloud makes that possible, and DynamoDB is a prime example.



The interim CEO would have to leave his post at Square to take over at Twitter

A week and a half after Dick Costolo announced that he would be stepping down from the CEO role at Twitter, the company’s board of directors has sent a shot across the bow of one of the expected front-runner candidates to take the social network’s top job.

The social micro-blogging company’s search committee will only consider CEO candidates “who are in a position to make a full-time commitment to Twitter,” the board said. That would seem to rule out Jack Dorsey, the company’s co-founder, who currently works as the CEO of Square and will be filling in as interim CEO of Twitter.

Dorsey has said that he plans to remain at the helm of the payment processing company he co-founded, but hasn’t explicitly ruled out a bid for a permanent berth in Twitter’s top job. Now the Twitter board has made it clear that he would have to depart Square if he wants to run Twitter. That’s a rough proposition for Dorsey, especially since Square is reportedly planning to go public this year.

As for the overall search process, Twitter’s search committee has contracted with executive search firm Spencer Stuart to evaluate internal and external candidates for the job. The board hasn’t set a firm time frame for its hiring of a new CEO, saying that there’s a “sense of urgency” to the process but that it will take its time to find the right person for the job.

Whoever steps into the top spot at Twitter will have to contend with increased pressure on the company from Wall Street. Investors have been disappointed by Twitter’s revenue and user growth in recent quarters.


 


Machine intelligence can be used to police networks and fill gaps where the available resources and capabilities of human intelligence are clearly falling short

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Humans are clearly incapable of monitoring and identifying every threat on today’s vast and complex networks using traditional security tools. We need to enhance human capabilities by augmenting them with machine intelligence. Mixing man and machine – in some ways, similar to what OmniCorp did with RoboCop – can heighten our ability to identify and stop a threat before it’s too late.

The “dumb” tools that organizations rely on today are simply ineffective. Two consistent, yet still surprising, things make this ineptitude apparent. The first is the amount of time hackers have free rein within a system before being detected: eight months at Premera and P.F. Chang’s, six months at Neiman Marcus, five months at Home Depot, and the list goes on.

The second surprise is the response. Everyone usually looks backwards, trying to figure out how the external actors got in. Finding the proverbial leak and plugging it is obviously important, but this approach only treats a symptom instead of curing the disease.

The disease, in this case, is the growing faction of hackers that are getting so good at what they do they can infiltrate a network and roam around freely, accessing more files and data than even most internal employees have access to. If it took months for Premera, Sony, Target and others to detect these bad actors in their networks and begin to patch the holes that let them in, how can they be sure that another group didn’t find another hole? How do they know other groups aren’t pilfering data right now? Today, they can’t know for sure.

The typical response
Until recently, companies have had only one real response to rising threats, and it’s one most organizations still employ. They re-harden systems, ratchet up firewall and IDS/IPS rules and thresholds, and put stricter web proxy and VPN policies in place. But by doing this they drown their incident response teams in alerts.

Tightening policies and adding to the number of scenarios that will raise a red flag just makes the job more difficult for security teams that are already stretched thin. This causes thousands of false positives every day, making it physically impossible to investigate every one. As recent high profile attacks have proven, the deluge of alerts is helping malicious activity slip through the cracks because, even when it is “caught,” nothing is being done about it.

In addition, clamping down on security rules and procedures just wastes everyone’s time. By design, tighter policies will restrict access to data, and in many cases, that data is what employees need to do their jobs well. Employees and departments will start asking for the tools and information they need, wasting precious time for them and the IT/security teams that have to vet every request.

Putting RoboCop on the case
Machine intelligence can be used to police massive networks and help fill gaps where the available resources and capabilities of human intelligence are clearly falling short. It’s a bit like letting RoboCop police the streets, but in this case the main armament is statistical algorithms. More specifically, statistics can be used to identify abnormal and potentially malicious activity as it occurs.

According to Dave Shackleford, an analyst at SANS Institute and author of its 2014 Analytics and Intelligence Survey, “one of the biggest challenges security organizations face is lack of visibility into what’s happening in the environment.” The survey of 350 IT professionals asked why they have difficulty identifying threats and a top response was their inability to understand and baseline “normal behavior.” It’s something that humans just can’t do in complex environments, and since we’re not able to distinguish normal behavior, we can’t see abnormal behavior.

Instead of relying on humans looking at graphs on big-screen monitors, or on human-defined rules and thresholds to raise flags, machines can learn what normal behavior looks like, adjusting in real time and becoming smarter as they process more information. What’s more, machines possess the speed required to process the massive amount of information that networks create, and they can do it in near-real time. Some networks process terabytes of data every second; humans, on the other hand, can process no more than 60 bits per second.
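As a rough illustration of the statistical baselining described above (a sketch, not any vendor’s actual product), the following Python snippet learns a rolling “normal” for one metric and flags values that stray too far from it; the traffic numbers are invented:

from collections import deque
import statistics

class BaselineDetector:
    """Learn what "normal" looks like for one metric and flag outliers."""

    def __init__(self, window=1000, threshold=4.0):
        self.samples = deque(maxlen=window)  # rolling baseline
        self.threshold = threshold           # std-devs that count as abnormal

    def observe(self, value):
        anomalous = False
        if len(self.samples) >= 30:  # wait for a minimal baseline first
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9  # avoid divide-by-zero
            anomalous = abs(value - mean) / stdev > self.threshold
        self.samples.append(value)
        return anomalous

# Invented feed: bytes sent per minute by one host.
detector = BaselineDetector()
for bytes_out in [1200, 1150, 1320, 1180, 1250] * 10 + [9800000]:
    if detector.observe(bytes_out):
        print("anomaly:", bytes_out)  # hand off to an analyst for triage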

Putting aside the need for speed and capacity, a larger issue with the traditional way of monitoring for security issues is that the rules are dumb. That’s not just name-calling, either; they’re literally dumb. Humans set rules that tell the machine how to act and what to do – the speed and processing capacity is irrelevant. While rule-based monitoring systems can be very complex, they’re still built on a basic “if this, then do that” formula. Enabling machines to think for themselves and feed better data and insight to the humans who rely on them is what will really improve security.

It’s almost absurd not to have a layer of security that thinks for itself. Imagine in the physical world if someone were crossing the border every day with a wheelbarrow full of dirt, and the customs agents, being diligent at their jobs and following the rules, sifted through that dirt day after day, never finding what they thought they were looking for. No one ever thinks to look at the wheelbarrow itself. If they had, they would have quickly learned he’d been stealing wheelbarrows the whole time!

Just because no one told the customs agents to look for stolen wheelbarrows doesn’t make it OK, but as they say, hindsight is 20/20. In the digital world, we don’t have to rely on hindsight anymore, especially now that we have the power to put machine intelligence to work and recognize anomalies that could be occurring right under our noses. For cybersecurity to be effective today, it needs at least a basic level of intelligence. Machines that learn on their own and detect anomalous activity can find the “wheelbarrow thief” who might be slowly siphoning data, even if you don’t specifically know that you’re looking for him.

Anomaly detection is among the first technology categories where machine learning is being put to use to enhance network and application security. It’s a form of advanced security analytics, a term that’s used quite frequently. However, there are a few requirements this type of technology must meet to truly be considered “advanced.” It must be easily deployed, operate continuously against a broad array of data types and sources, and work at huge data scales to produce high-fidelity insights, so as not to add to the alert blindness already confronting security teams.

Leading analysts agree that machine learning will soon be a “need to have” in order to protect a network. In a November 2014 Gartner report titled “Add New Performance Metrics to Manage Machine-Learning-Enabled Systems,” analyst Will Cappelli states directly, “machine learning functionality will, over the next five years, gradually become pervasive and, in the process, fundamentally modify system performance and cost characteristics.”

While machine learning is certainly not a silver bullet that will solve all security challenges, there’s no doubt it will provide better information to help humans make better decisions. Let’s stop asking people to do the impossible and let machine intelligence step in to help get the job done.



 

 

While businesses plan to increase IT hiring in 2015, it may be easier said than done, especially when it comes to hiring software developers.

The good news is that more businesses are planning to boost their IT hiring in 2015. The bad news? Many are struggling to find talent to fill vacant or newly created roles, especially for software developers and data analytics pros, according to a recent survey from HackerRank, which matches IT talent with hiring companies using custom coding challenges.

In a survey of current and potential customers performed in March, HackerRank asked 1,300 hiring managers about their hiring outlook for the coming year, their hiring practices and the challenges they faced in filling open positions. Of those who responded, 76 percent said they planned to fill more technical roles in the remainder of 2015 than they did in 2014.
Theory vs. practice

But intending to fill open positions and actually filling them are two different things, as the survey results show. While 94 percent of respondents to the survey say they’re hiring Java developers and 68 percent are hiring for user interface/user experience (UI/UX) designers, 41 percent also claim these roles are difficult to fill.

“That number was the most surprising when we looked at the results. We knew it was going to be a significant percentage, but it seems customers are really struggling to fill these software development roles,” says Vivek Ravisankar, co-founder and CEO of HackerRank.
Java continues to dominate

The survey also revealed that Java continues to be the dominant language sought by hiring managers and recruiters. Of the survey respondents, 69 percent say Java is the most important skill candidates can have.

“Many of our customers are involved in Web-based business or in developing apps. And Java is instrumental for both of these business pursuits — we absolutely expected to hear this from the survey, and we weren’t surprised,” says Ravisankar.
What makes these positions so difficult to fill?

Part of the problem may lie with candidates’ perceptions of a company’s brand, says Tejal Parekh, HackerRank’s vice president of marketing. “We work with a lot of customers in areas that aren’t typically thought of as technology hotspots. For instance, in the finance sector we have customers facing a dearth of IT talent; they’re all innovative companies with a strong technology focus, but candidates don’t see them as such. They want to go to Facebook or Amazon,” says Parekh.

Another challenge lies with the expectations hiring companies have of their candidate pool, says Ravisankar. “There’s also an unconscious bias issue with customers who sometimes limit themselves by not looking outside the traditional IT talent pool. They’re only considering white, male talent from specific schools or specific geographic areas,” says Ravisankar.
Up the ante

As demand for IT talent increases, so do IT salaries. According to the survey, 67 percent of hiring managers say that salaries for technical positions have increased between 2014 and 2015 while 32 percent say they have stayed the same. Overall, HackerRank’s survey highlights the great opportunities available for software development talent and for the companies vying to hire them.



Microsoft released eight security bulletins, two rated critical, but four address remote code execution vulnerabilities that an attacker could exploit to take control of a victim’s machine.

For June 2015 “Update Tuesday,” Microsoft released eight security bulletins; only two of the security updates are rated critical for resolving remote code execution (RCE) flaws, but two patches rated important also address RCE vulnerabilities.

Rated as Critical
MS15-056 is a cumulative security update for Internet Explorer, which fixes 24 vulnerabilities. Qualys CTO Wolfgang Kandek added, “This includes 20 critical flaws that can lead to RCE which an attacker would trigger through a malicious webpage. All versions of IE and Windows are affected. Patch this first and fast.”

Microsoft said the patch resolves vulnerabilities by “preventing browser histories from being accessed by a malicious site; adding additional permission validations to Internet Explorer; and modifying how Internet Explorer handles objects in memory.”

MS15-057 fixes a hole in Windows that could allow remote code execution if Windows Media Player opens specially crafted media content that is hosted on a malicious site. An attacker could exploit this vulnerability to “take complete control of an affected system remotely.”

Rated as Important
MS15-058 appears only as a placeholder, but MS15-059 and MS15-060 both address remote code execution flaws.

MS15-059 addresses RCE vulnerabilities in Microsoft Office. Although it’s rated important for Microsoft Office 2010 and 2013, Microsoft Office Compatibility Pack Service Pack 3 and Microsoft Office 2013 RT, Kandek said it should be your second patching priority. If an attacker could convince a user to open a malicious file with Word or any other Office tool, then he or she could take control of a user’s machine. “The fact that one can achieve RCE, plus the ease with which an attacker can convince the target to open an attached file through social engineering, make this a high-risk vulnerability.”

MS15-060 resolves a vulnerability in Microsoft Windows “common controls.” The vulnerability “could allow remote code execution if a user clicks a specially crafted link, or a link to specially crafted content, and then invokes F12 Developer Tools in Internet Explorer.” Kandek explained, “MS15-060 is a vulnerability in the common controls of Windows which is accessible through Internet Explorer Developer Menu. An attack needs to trigger this menu to be successful. Turning off developer mode in Internet Explorer (why is it on by default?) is a listed work-around and is a good defense in depth measure that you should take a look at for your machines.”
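For admins who want to script that workaround, the sketch below shows the general idea in Python. The registry path and value name are assumptions based on the commonly documented “Turn off Developer Tools” policy, so verify them against Microsoft’s documentation before deploying; the script must also run elevated to write under HKLM:

# Assumed policy location for "Turn off Developer Tools"; verify first.
import winreg

key_path = r"SOFTWARE\Policies\Microsoft\Internet Explorer\IEDevTools"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    # DWORD Disabled=1 turns the developer tools off machine-wide.
    winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 1)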

The last four patches Microsoft issued address elevation of privilege vulnerabilities.

MS15-061 resolves bugs in Microsoft Windows kernel-mode drivers. “The most severe of these vulnerabilities could allow elevation of privilege if an attacker logs on to the system and runs a specially crafted application. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

MS15-062 addresses a security hole in Microsoft Active Directory Federation Services. Microsoft said, “The vulnerability could allow elevation of privilege if an attacker submits a specially crafted URL to a target site. Due to the vulnerability, in specific situations specially crafted script is not properly sanitized, which subsequently could lead to an attacker-supplied script being run in the security context of a user who views the malicious content. For cross-site scripting attacks, this vulnerability requires that a user be visiting a compromised site for any malicious action to occur.”

MS15-063 is another patch for the Windows kernel; it could allow EoP “if an attacker places a malicious .dll file in a local directory on the machine or on a network share. An attacker would then have to wait for a user to run a program that can load a malicious .dll file, resulting in elevation of privilege. However, in all cases an attacker would have no way to force a user to visit such a network share or website.”

MS15-064 resolves vulnerabilities in Microsoft Exchange Server by “modifying how Exchange web applications manage same-origin policy; modifying how Exchange web applications manage user session authentication; and correcting how Exchange web applications sanitize HTML strings.”

It would be wise to patch Adobe Flash while you are at it, as four of the 13 vulnerabilities patched are rated critical.

Happy patching!



Android apps, almost across the board, are not architected correctly for the best networking performance, Google developer advocate Colt McAnlis said during a talk he gave Friday at Google’s I/O developer conference in San Francisco.

“Networking performance is one of the most important things that every one of your apps does wrong,” he told the crowd.


By structuring the way apps access the network inefficiently, McAnlis said, developers are imposing needless costs in terms of performance and battery life – costs for which their users are on the hook.

“Bad networking costs your customers money,” he said. “Every rogue request you make, every out-of-sync packet, every two-bit image you request, the user has to pay for. Imagine if I went out and told them that.”

The key to fixing the problem? Use the radio less, and don’t move so much data around, McAnlis said.

One way to do this is batching, he said – architecting an app such that lower-priority data is sent when a device’s networking hardware has been activated by something else, minimizing the amount of time and energy used by the radio.

Pre-fetching data is another important technique for smoothing out network usage by Android apps, he said.

“If you can somehow sense that you’re going to make six or seven requests in the future, don’t wait for the device to go to sleep and then wake it up again – take advantage of the fact that the chip is awake right now, and make the requests right now,” McAnlis said.
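The batching and piggybacking idea itself is platform-neutral. Here is a minimal Python sketch of the pattern; the request strings and the send callable are placeholders, and on Android the real work would go through the platform’s networking APIs:

class RequestBatcher:
    """Defer low-priority requests until the radio is already powered up."""

    def __init__(self, send):
        self.send = send      # callable that performs the real network call
        self.pending = []

    def enqueue(self, request):
        # Low-priority work: queue it rather than waking the radio for it.
        self.pending.append(request)

    def on_radio_active(self):
        # A high-priority transfer already woke the radio; piggyback the
        # queued requests now, then let the hardware go back to sleep.
        while self.pending:
            self.send(self.pending.pop(0))

# Placeholder usage: analytics pings ride along with a user-triggered fetch.
batcher = RequestBatcher(send=print)
batcher.enqueue("POST /analytics/event-1")
batcher.enqueue("POST /analytics/event-2")
batcher.on_radio_active()  # both requests flushed in one radio wake-up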

He also urged developers to use Google Cloud Messaging, rather than relying on server polling for updates.

“Polling the server is horrible. … It is a waste of the user’s time,” McAnlis said. “Think about this: Every time you poll the server and it comes back with a null packet, telling you that there’s no new data, the user’s paying for that.”
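A toy Python sketch makes the cost of polling concrete. The 5 percent hit rate is invented, and the push handler is an in-process stand-in for a real push service such as Google Cloud Messaging:

import random

def poll_server():
    """Simulate a poll that usually finds nothing new (invented 5% hit rate)."""
    return "update!" if random.random() < 0.05 else None

# Polling: the radio is woken for every request, and the user pays for
# each null packet that comes back.
wasted = sum(1 for _ in range(100) if poll_server() is None)
print(wasted, "of 100 polls moved no useful data")

# Push: the app does nothing until the server actually has something to
# say; this callback stands in for a push-service delivery.
def on_push(message):
    print("received:", message)

on_push("update!")  # one transfer, zero wasted wake-ups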



6 IT leaders share tips to drive collaboration

Collaboration tools are destined to fail when IT leaders look to solve problems that don’t exist. Here’s how CIOs and IT managers ensure their collaborative platform efforts aren’t futile.

Driving enterprise collaboration is a tall order for CIOs and other IT leaders. The challenges don’t end after a new tool is implemented. If not done the right way for the right reasons, the headaches of deploying a collaboration platform can fester well beyond the technical hurdles.

The first thing to remember is that collaboration tool adoption in the enterprise is a journey, John Abel, senior vice president of IT at Hitachi Data Systems, told CIO.com in an email interview.

“It has to be appealing or provide a value or information where employees find it more difficult to access on other platforms,” Abel says.

Collaboration projects are almost destined to get bogged down when IT leaders pursue solutions to problems that don’t exist. So how can CIOs ensure success?

Empower employees and respect their needs

IT leaders should get insights into the tools employees already use and make sure they are personally invested in the selection process, Brian Lozada, director and CISO at the job placement firm Abacus Group, told CIO.com.

When employees are empowered, they are more likely to use and generate excitement for new collaboration tools internally, Lozada says. Employees ultimately contribute to and determine the success of most collaboration efforts.

It’s also important to acknowledge what success in enterprise collaboration looks like. This is particularly true when employees use collaboration tools to get work done more effectively, says NetScout CIO and Senior Vice President of Services Ken Boyd. “Freedom and flexibility are paramount to how most users want to work today.”

The less training required, the better: tools that are more intuitive tend to deliver greater benefits for the organization and the user.

“Faster employee engagement of a collaboration tool comes by addressing a pain point in a communication or productivity area, and showing how the tool, with a simple click, provides better or instant access to colleagues and information, shaves seconds or minutes off schedules, or provides greater visibility into a team project,” Boyd says.

Presenting the business benefits of faster and more widespread adoption of collaboration tools can be a strong motivator for many department heads as well, Boyd says.

User experience is a critical component of any tool and its chances for success, according to Shamlan Siddiqi, vice president of architecture and application development at the systems integrator NTT Data. “Users want something they can quickly deploy with the same immersive and collaborative experience that they get when using collaboration tools at home,” he says.

Gamification is a leading trigger for adoption

“Employee engagement techniques such as gamification and game-design principles help create incentives for users to engage and embrace tools more effectively,” says Siddiqi, adding that NTT Data has seen significant increases in collaborative tool engagement internally through the introduction of gamification.

Chris McKewon, founder and CEO of the managed services provider Xceptional Networks, agrees that gamification is the best way to encourage employees to use new tools.

“Gamification provides incentives for embracing the technology and demonstrates how much more real work people can get done with these tools by selling the concepts on benefits, not on features,” McKewon told CIO.com in an email interview.
Collaboration and the art of seduction

Ruven Gotz, director of collaboration services at the IT solutions vendor Avanade, says his team drives adoption by seduction.

“Our goal is to create collaboration experiences that users clearly recognize as the superior means to achieve the results they seek,” Gotz says.

When CIOs and IT leaders get enterprise collaboration right, there’s no need to drive adoption, Gotz says, because “employees recognize that we have provided a better working flow and will abandon other alternatives.”



QUESTION 1
You are using SQL Server Management Studio (SSMS) to configure the backup for ABC Solutions. You need to meet the technical requirements.
Which two backup options should you configure? (Choose two.)

A. Enable encryption of the backup file.
B. Enable compression of the backup file.
C. Disable encryption of the backup file.
D. Disable compression of the backup file.

Answer: B, C

Explanation:


QUESTION 2
You need to convert the Production, Sales, Customers and Human Resources databases to tabular BI Semantic Models (BISMs).
Which two of the following actions should you perform? (Choose two.)

A. You should select the tabular mode option when upgrading the databases using the Database Synchronization Wizard.
B. You should select the tabular mode destination option when copying the databases using SQL Server Integration Services (SSIS).
C. You should select the tabular mode option during the installation of SQL Server Analysis Services.
D. You should redevelop the projects and deploy them using SQL Server Data Tools (SSDT).

Answer: A, D

Explanation:


QUESTION 3
ABC users report that they are not receiving report subscriptions from SQLReporting01. You confirm that the report subscriptions are not being delivered.
Which of the following actions should you perform to resolve the issue?

A. You should run the SQL Server 2012 Setup executable on SQLReporting01 to generate a configuration file.
B. You should reset the password of the SQL Server Service account.
C. You should manually fail over the SSAS cluster.
D. You should restore the ReportServer database on SQLReporting01.

Answer: C

Explanation:


QUESTION 4
ABC users report that they are not receiving report subscriptions from SQLReporting01. You confirm that the report subscriptions are not being delivered.
Which of the following actions should you perform to resolve the issue?

A. You should run the SQL Server 2012 Upgrade Wizard to upgrade the active node of the SSAS cluster.
B. You should start the SQL Server Agent on the active node of the SSAS cluster.
C. You should restore the ReportServerTempDB database on SQLReporting01.
D. You should start the SQL Server Agent on SQLReporting01.

Answer: D

Explanation:


QUESTION 5
You need to make the SSAS databases available on SSAS2012 to enable testing from client applications. Your solution must minimize server downtime and maximize database availability.
What should you do?

A. You should detach the databases from the SSAS cluster by using SQL Server Management Studio (SSMS), then attach the databases on SSAS2012.
B. You should copy the database files from the SSAS cluster to SSAS2012.
C. You should export the databases from the SSAS cluster by using SQL Server Management Studio (SSMS), then import the databases on SSAS2012.
D. You should restore a copy of the databases from the most recent backup.

Answer: D

Explanation:


Open-source software projects are often well intended, but security can take a back seat to making the code work.

OpenDaylight, the multivendor software-defined networking (SDN) project, learned that the hard way last August after a critical vulnerability was found in its platform.

It took until December for the flaw, called Netdump, to get patched, a gap in time exacerbated by the fact that the project didn’t yet have a dedicated security team. After he tried and failed to get in touch with OpenDaylight, the finder of the vulnerability, Gregory Pickett, posted it on Bugtraq, a popular mailing list for security flaws.


Although OpenDaylight is still in the early stages and generally isn’t used in production environments, the situation highlighted the need to put a security response process in place.

“It’s actually a surprisingly common problem with open-source projects,” said David Jorm, a product security engineer with IIX who formed OpenDaylight’s security response team. “If there are not people with a strong security background, it’s very common that they won’t think about providing a mechanism for reporting vulnerabilities.”

The OpenDaylight project was launched in April 2013 and is supported by vendors including Cisco Systems, IBM, Microsoft, Ericsson and VMware. The aim is to develop networking products that remove some of the manual fiddling that administrators still need to do with controllers and switches.

Having a common foundation for those products would help with compatibility, as enterprises often use a variety of networking equipment from many vendors.

Security will be an integral component of SDN, since a flaw could have devastating consequences. By compromising an SDN controller—a critical component that tells switches how data packets should be forwarded—an attacker would have control over the entire network, Jorm said.

“It’s a really high value target to go after,” Jorm said.

The Netdump flaw kicked OpenDaylight into action, and there is now a security team in place, with members from a range of vendors representing different projects within OpenDaylight, Jorm said.

OpenDaylight’s technical steering committee also recently approved a detailed security response process modeled on one used by the OpenStack Foundation, Jorm said.

If a vulnerability is reported privately and not publicly disclosed, some OpenDaylight stakeholders—even those who do not have a member on the security team—will get pre-notification so they have a chance to develop a patch, Jorm said. That kind of disclosure is rare, though it is becoming more common with open-source projects.

The idea is that once a flaw is disclosed, vendors will generally be on the same page and release a patch around the same time, Jorm said.

OpenDaylight’s security response process is “quite well ironed out now,” Jorm said.



It’s not often that a great product becomes even greater …

The Raspberry Pi 2 Model B, available from Element 14, was recently released, and it’s a serious step up from its predecessors. Before we dive into what makes it an outstanding product, the Raspberry Pi family tree, from oldest to newest, is as follows:

Raspberry Pi B
Raspberry Pi A
Raspberry Pi B+
Raspberry Pi A+
Raspberry Pi 2 Model B

The + models were upgrades of the previous board versions, and the RPi2B is the Raspberry Pi B+’s direct descendant with added muscle. So, what makes the Raspberry Pi 2 Model B great?

The Raspberry Pi 2 Model B has a 40-pin GPIO header, as did the A+ and B+, and the first 26 pins are identical to the A and B models, making the new board a drop-in upgrade for most projects (the short sketch after these points shows the kind of GPIO code that carries over). The new board also supports all of the expansion (HAT) boards used by the previous models.
The Raspberry Pi 2 Model B has an identical board layout and footprint as the B+, so all cases and 3rd party add-on boards designed for the B+ will be fully compatible.
In common with the B+, the Raspberry Pi 2 Model B has four USB 2.0 ports (compared to the one or two USB ports on the A, A+, and B models) that can provide up to 1.2 amps for more power-hungry USB devices (this feature does, however, require a 2-amp power supply).
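As noted above, here is the kind of GPIO code that carries over unchanged: a minimal blink sketch that assumes the common RPi.GPIO Python library and an LED wired to physical pin 7 (the pin choice is illustrative):

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)   # number pins by their position on the header
GPIO.setup(7, GPIO.OUT)    # illustrative pin choice; wire an LED here

try:
    for _ in range(10):    # blink the LED ten times
        GPIO.output(7, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(7, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()         # release the pins on exit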

The Raspberry Pi 2 Model B video output is via a full-sized HDMI (rev 1.3 & 1.4) port with 14 HDMI resolutions from 640×350 to 1920×1200 with digital audio (there’s also composite video output; see below).

The A, A+, and B models use linear power regulators while the B+ and the Raspberry Pi 2 Model B have switching regulators which reduce power consumption by between 0.5W and 1W.
In common with the B+, the Raspberry Pi 2 Model B’s audio circuit has a dedicated low-noise power supply for better audio quality, and analog stereo audio is output on the four-pole 3.5mm jack it shares with composite video (PAL and NTSC) output.

The previous top of the line B+ model had 512MB of RAM while the new Raspberry Pi 2 Model B now has 1GB making it possible to run larger applications and more complex operating system environments.

The previous Raspberry Pi models used a 700 MHz single-core ARM1176JZF-S processor while the Raspberry Pi 2 Model B has upped the ante to a 900 MHz quad-core ARM Cortex-A7, a considerably faster CPU. The result is performance that’s roughly 6 times better! The advantages of upgrading existing projects to the Raspberry Pi 2 Model B are huge.

Not only will the Raspberry Pi 2 Model B run all of the operating systems its predecessors ran, it will also be able to run Microsoft’s Windows 10 … for free! Yep, Microsoft has decided that it wants to be part of the Raspberry Pi world and for a good reason; a huge number of kids will have their first experience of computing on RPi boards and what better way to gain new acolytes?

This may be the best improvement of the lot: For the added compute power, increased RAM, and drop-in compatibility there’s no extra cost! The Raspberry Pi 2 Model B is priced at $35, the same as its predecessor!

The Raspberry Pi 2 Model B is one of the best (quite possibly, *the* best) single board computers available and, given the huge popularity of the Raspberry Pi family (now with more than 500,000 Raspberry Pi 2 Model B’s sold and around 5 million Pi’s in total if you include all models), it’s one of the best understood and supported products of its kind. Whether it’s for hobbyist, educational, or commercial use, the Raspberry Pi 2 Model B is an outstanding product.


 

 


AIIM group finds Microsoft’s Yammer social tool slow to catch on as well, though IT shops hopeful about product roadmap

Many SharePoint installations at enterprises have been doomed largely by senior management’s failure to really get behind the Microsoft collaboration technology, according to a new study by AIIM, which bills itself as “the Global Community of IT Professionals.”

The AIIM (Association for Information and Image Management) Web-based survey of 409 member organizations found that nearly two-thirds described their SharePoint projects as either stalled (26%) or not meeting original expectations (37%).

The associated Yammer social business tool has also been slow to catch on, with only about 1 in 5 organizations using it, and only 10% of them using it regularly and on a widespread basis (Disclosure: I use it a bit here and there at IDG Enterprise!). Many organizations aren’t specifically biased against Yammer though — 4 in 10 say they don’t use any such tool.

Reasons cited for tepid uptake of SharePoint and Yammer include inadequate user training and investment.

“Enterprises have it, but workers are simply not engaging with SharePoint in a committed way,” said Doug Miles, AIIM director of market intelligence, in a statement. “It remains an investment priority however, and the C-suite must get behind it more fully than they are currently if they are to realize a return on that investment.”

Miles says it shouldn’t be up to IT departments to push SharePoint within organizations, but rather, business lines should take the lead.

The study showed that 75% of respondents still feel strongly about making SharePoint work at their organizations. The cloud-based Office 365 version has shown good signs of life, and 43% of respondents indicated faith in Microsoft’s product roadmap for its collaboration tools, according to the AIIM report.

Half of respondents expressed concern about a lack of focus by Microsoft on the on-premise version of SharePoint. That’s an issue that market watcher Gartner stressed last year could make SharePoint a lot less useful for organizations counting on it for customer-facing and content marketing applications.

You can get a free full version of the AIIM study, ‘Connecting and Optimizing SharePoint’, by filling out a registration form.

The research was underwritten in part by ASG, AvePoint, Colligo, Concept Searching, Collabware, EMC, Gimmal Group, K2 and OpenText. While Microsoft is a member of AIIM’s Executive Leadership Council, it is not listed as one of the funders for this study.

A Microsoft representative is looking into our request for comment on the report.


 


 

The best office apps for Android

Written by admin
January 19th, 2015

Which office package provides the best productivity experience on Android? We put the leading contenders to the test

Getting serious about mobile productivity
We live in an increasingly mobile world — and while many of us spend our days working on traditional desktops or laptops, we also frequently find ourselves on the road and relying on tablets or smartphones to stay connected and get work done.

Where do you turn when it’s time for serious productivity on an Android device? The Google Play Store boasts several popular office suite options; at a glance, they all look fairly comparable. But don’t be fooled: All Android office apps are not created equal.

I spent some time testing the five most noteworthy Android office suites to see where they shine and where they fall short. I looked at how each app handles word processing, spreadsheet editing, and presentation editing, both in terms of the features each app offers and in terms of user interface and experience. I took both tablet and smartphone performance into consideration.

Click through for a detailed analysis; by the time you’re done, you’ll have a crystal-clear idea of which Android office suite is right for you.

Best Android word processor: OfficeSuite 8 Premium
Mobile Systems’ OfficeSuite 8 Premium offers desktop-class word processing that no competitor comes close to matching. The UI is clean, easy to use, and intelligently designed to expand to a tablet-optimized setup. Its robust set of editing tools is organized into easily accessible on-screen tabs on a tablet (and condensed into drop-down menus on a phone). OfficeSuite 8 Premium provides practically everything you need, from basic formatting to advanced table creation and manipulation utilities. You can insert images, shapes, and freehand drawings; add and view comments; track, accept, and reject changes; spell-check; and calculate word counts. There’s even a native PDF markup utility, PDF export, and the ability to print to a cloud-connected printer.

OfficeSuite 8 Premium works with locally stored Word-formatted files and connects directly to cloud accounts, enabling you to view and edit documents without having to download or manually sync your work.

Purchasing OfficeSuite 8 Premium is another matter. Search the Play Store, and you’ll find three offerings from Mobile Systems: a free app, OfficeSuite 8 + PDF Converter; a $14.99 app, OfficeSuite 8 Pro + PDF; and another free app, OfficeSuite 8 Pro (Trial). The company also offers a dizzying array of add-ons that range in price from free to $20.

The version reviewed here — and the one most business users will want — is accessible only by downloading the free OfficeSuite 8 + PDF Converter app and following the link on the app’s main screen to upgrade to Premium. The upgrade is a one-time $19.99 in-app purchase that unlocks all available options, giving you the most fully featured setup with no further purchases required.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android word processor: Google Docs
Google’s mobile editing suite has come a long way, thanks largely to its integration of Quickoffice, which Google acquired in 2012. With the help of Quickoffice technology, the Google Docs word processor has matured into a usable tool for folks with basic editing needs.

Docs is nowhere near as robust as OfficeSuite 8 Premium, but if you rely mainly on Google’s cloud storage or want to do simple on-the-go writing or editing, it’s light, free, and decent enough to get the job done, whether you’re targeting locally stored files saved in standard Word formats or files stored within Docs in Google’s proprietary format.

Docs’ clean, minimalist interface follows Google’s Material Design motif, making it pleasant to use. It offers basic formatting (fonts, lists, alignment) and tools for inserting and manipulating images and tables. The app’s spell-check function is limited to identifying misspelled words by underlining them within the text; there’s no way to perform a manual search or to receive proper spelling suggestions.

Google Docs’ greatest strength is in its cross-device synchronization and collaboration potential: With cloud-based documents, the app syncs changes instantly and automatically as you work. You can work on a document simultaneously from your phone, tablet, or computer, and the edits and additions show up simultaneously on all devices. You can also invite other users into the real-time editing process and keep in contact with them via in-document commenting.

App: Google Docs
Price: Free
Developer: Google

The rest of the Android word processors
Infraware’s Polaris Office is a decent word processor held back by pesky UI quirks and an off-putting sales approach. The app was clearly created for smartphones; as a result, it delivers a subpar tablet experience, with basic commands tucked away and features like table creation stuffed into short windows that require awkward scrolling to see all the content. Polaris also requires you to create an account before using the app and pushes a $40-a-year membership that adds a few extras and access to the company’s superfluous cloud storage service.

Kingsoft’s free WPS Mobile Office (formerly Kingsoft Office) has a decent UI but is slow to open files and makes it difficult to find documents stored on your device. I also found it somewhat buggy and inconsistent: When attempting to edit existing Word (.docx) documents, for instance, I often couldn’t get the virtual keyboard to load, rendering the app useless. (I experienced this on multiple devices, so it wasn’t specific to any one phone or tablet.)

DataViz’s Docs to Go (formerly Documents to Go) has a dated, inefficient UI, with basic commands buried behind layers of pop-up menus and a design reminiscent of Android’s 2010 Gingerbread era. While it offers a reasonable set of features, it lacks functionality like image insertion and spell check; also, it’s difficult to find and open locally stored documents. It also requires a $14.99 Premium Key to remove ads peppered throughout the program and to gain access to any cloud storage capabilities.

Best Android spreadsheet editor: OfficeSuite 8 Premium
With its outstanding user interface and comprehensive range of features, OfficeSuite 8 Premium stands out above the rest in the realm of spreadsheets. Like its word processor, the app’s spreadsheet editor is clean, easy to use, and fully adaptive to the tablet form.

It’s fully featured, too, with all the mathematical functions you’d expect organized into intuitive categories and easily accessible via a prominent dedicated on-screen button. Other commands are broken down into standard top-of-screen tabs on a tablet or are condensed into a drop-down menu on a smartphone.

From advanced formatting options to multiple-sheet support, wireless printing, and PDF exporting, there’s little lacking in this well-rounded setup. And as mentioned above, OfficeSuite offers a long list of cloud storage options that you can connect with to keep your work synced across multiple devices.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android spreadsheet editor: Polaris Office
Polaris Office still suffers from a subpar, non-tablet-optimized UI, but after OfficeSuite 8 Premium, it’s the next best option.

Design aside, the Polaris Office spreadsheet editor offers a commendable set of features, including support for multiple sheets and easy access to a full array of mathematical functions. The touch targets are bewilderingly small, which is frustrating for an app controlled by fingers, but most options you’d want are there, even if not ideally presented or easily accessible.

Be warned that the editor has a quirk: You sometimes have to switch from “view” mode to “edit” mode before you can make changes to a sheet — not entirely apparent when you first open a file. Be ready to be annoyed by the required account creation and subsequent attempts to get you to sign up for an unnecessary paid annual subscription.

Quite honestly, the free version of OfficeSuite would be a preferable alternative for most users; despite its feature limitations compared to the app’s Premium configuration, it still provides a better overall experience than Polaris or any of its competitors. If that doesn’t fit the bill for you, Polaris Office is a distant second that might do the trick.

App: Polaris Office
Price: Free (with optional annual subscription)
Developer: Infraware

The rest of the Android spreadsheet editors
Google Sheets (part of the Google Docs package) lacks too many features to be usable for anything beyond the most basic viewing or tweaking of a simple spreadsheet. The app has a Function command for standard calculations, but it’s hidden and appears in the lower-right corner of the screen inconsistently, rendering it useless most of the time. You can’t sort cells or insert images, and its editing interface adapts poorly to tablets. Its only saving grace is integrated cloud syncing and multiuser/multidevice collaboration.

WPS Mobile Office is similarly mediocre: It’s slow to open files, and its Function command — a vital component of spreadsheet work — is hidden in the middle of an “Insert” menu. On the plus side, it has an impressive range of features and doesn’t seem to suffer from the keyboard bug present in its word-processing counterpart.

Docs to Go is barely in the race. Its embarrassingly dated UI makes no attempt to take advantage of the tablet form. Every command is buried behind multiple layers of pop-up menus, all of which are accessible only via an awkward hamburger icon at the top-right of the screen. The app’s Function command doesn’t even offer descriptions of what the options do — only Excel-style lingo like “ABS,” “ACOS,” and “COUNTIF.” During my testing, the app failed to open some perfectly valid Excel (.xlsx) files I used across all the programs as samples.

Best Android presentation editor: OfficeSuite 8 Premium
OfficeSuite 8 Premium’s intuitive, tablet-optimized UI makes it easy to edit and create presentations on the go. Yet again, it’s the best-in-class contender by a long shot. (Are you starting to sense a pattern here?)

OfficeSuite offers loads of options for making slides look professional, including a variety of templates and a huge selection of slick transitions. It has tools for inserting images, text boxes, shapes, and freehand drawings into your slides, and it supports presenter notes and offers utilities for quickly duplicating or reordering slides. You can export to PDF and print to a cloud-connected printer easily.

If you’re serious about mobile presentation editing, OfficeSuite 8 Premium is the only app you should even consider.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android presentation editor: Polaris Office
If it weren’t for the existence of OfficeSuite, Polaris’s presentation editor would look pretty good. The app offers basic templates to get your slides started; they’re far less polished and professional-looking than OfficeSuite’s, but they get the job done.

Refreshingly, the app makes an effort to take advantage of the tablet form in this domain, providing a split view with a rundown of your slides on the left and the current slide in a large panel alongside it. (On a phone, that rundown panel moves to the bottom of the screen and becomes collapsible.)

With Polaris, you can insert images, shapes, tables, charts, symbols, and text boxes into slides, and drag and drop to reorder any slides you’ve created. It offers no way to duplicate an existing slide, however, nor does it sport any transitions to give your presentation pizazz. It also lacks presenter notes.

Most people would get a better overall experience from even the free version of OfficeSuite, but if you want a second option, Polaris is the one.

App: Polaris Office
Price: Free (with optional annual subscription)
Developer: Infraware

The rest of the Android presentation editors
Google Slides (part of the Google Docs package) is bare-bones: You can do basic text editing and formatting, and that’s about it. The app does offer predefined arrangements for text box placement — and includes the ability to view and edit presenter notes — but with no ability to insert images or slide backgrounds and no templates or transitions, it’s impossible to create a presentation that looks like it came from this decade.

WPS Mobile Office is similarly basic, though with a few extra flourishes: The app allows you to insert images, shapes, tables, and charts in addition to plain ol’ text. Like Google Slides, it lacks templates, transitions, and any other advanced tools and isn’t going to create anything that looks polished or professional.

Last but not least, Docs to Go — as you’re probably expecting by this point — borders on unusable. The app’s UI is dated and clunky, and the editor offers practically no tools for modern presentation creation. You can’t insert images or transitions; even basic formatting tools are sparse. Don’t waste your time looking at this app.

Putting it all together
The results are clear: OfficeSuite 8 Premium is by far the best overall office suite on Android today. From its excellent UI to its commendable feature set, the app is in a league of its own. At $19.99, the full version isn’t cheap, but you get what you pay for, which is the best mobile office experience with next to no compromises. The less fully featured OfficeSuite 8 Pro ($9.99) is a worthy one-step-down alternative, as is the basic, ad-supported free version of the main OfficeSuite app.

If basic on-the-go word processing is all you require — and you work primarily with Google services — Google’s free Google Docs may be good enough. The spreadsheet and presentation editors are far less functional, but depending on your needs, they might suffice.

Polaris Office is adequate but unremarkable. The basic program is free, so if you want more functionality than Google’s suite but don’t want to pay for OfficeSuite — or use OfficeSuite’s lower-priced or free offerings — it could be worth considering. But you’ll get a significantly less powerful program and less pleasant overall user experience than what OfficeSuite provides.

WPS Mobile Office is a small but significant step behind, while Docs to Go is far too flawed to be taken seriously as a viable option.

With that, you’re officially armed with all the necessary knowledge to make your decision. Grab the mobile office suite that best suits your needs — and be productive wherever you may go.



 

Coming soon: Better geolocation Web data

Written by admin
January 8th, 2015

The W3C and OGC pledge to ease the path for developing location-enriched Web data

From ordering pizza online to pinpointing the exact location of a breaking news story, an overwhelming portion of data on the Web has geographic elements. Yet for Web developers, wrangling the most value from geospatial information remains an arduous task.

Now the standards body for the Web has partnered with the standards body for geographic information systems (GIS) to help make better use of the Web for sharing geospatial data.

Both the World Wide Web Consortium (W3C) and the Open Geospatial Consortium (OGC) have launched working groups devoted to the task. They are pledging to closely coordinate their activities and publish joint recommendations.

Adding geographic elements to data online in a meaningful way “can be done now, but it is difficult to link the two worlds together and to use the infrastructure of the Web effectively alongside the infrastructure of geospatial systems,” said Phil Archer, who is acting as data activity lead for the W3C working group.

A lack of standards is not the problem. “The problem is that there are too many,” he said. With this in mind, the two standards groups are developing a set of recommendations for how to best use existing standards together.

As much as 80 percent of data has some geospatial element to it, IT research firm Gartner has estimated. In the U.S. alone, geospatial services generate approximately $75 billion a year in annual revenue, according to the Boston Consulting Group.

Making use of geospatial data still can be a complex task for the programmer, however. An untold amount of developer time is frittered away trying to understand multiple formats and sussing out the best ways to bridge them together.

For GIS (geographic information system) software, the fundamental units of geospatial surface measurement are the point, line and polygon. Yet, people who want to use geographically enhanced data tend to think about locations in a fuzzier manner.

For instance, say someone wants to find a restaurant in the “Little Italy” section of a city, Archer explained. Because such neighborhoods are informally defined, they don’t have a specific grid of coordinates that could help in generating a definitive set of restaurants in that area.

“That sort of information is hard to get if you don’t have geospatial information and it is also hard to get if you only have geospatial information,” Archer said.
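A small Python sketch, assuming the shapely library and invented coordinates, shows why this matters: once “Little Italy” has an agreed polygon, the point-in-polygon test is trivial. Producing and linking that polygon is the hard part the working groups are tackling:

from shapely.geometry import Point, Polygon

# Invented boundary and coordinates; real neighborhood polygons are the
# kind of linked geospatial data the working groups want to standardize.
little_italy = Polygon([(0, 0), (0, 4), (3, 4), (3, 0)])
restaurant = Point(1.5, 2.0)

print(little_italy.contains(restaurant))  # True -> include in the results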

Much of the work the groups will do will be centered around bridging geolocational and non-geolocational data in better ways — work that the two groups agreed needed to be completed at a joint meeting last March in London.

The groups will build on previous research done in the realm of linked open data, an approach of formatting disparate sources of data so they can be easily interlinked.

The groups will also look at ways to better harness emerging standards, notably the W3C’s Semantic Sensor Network ontology and OGC’s GeoSPARQL.

The working groups plan to define their requirements within the next few months and will issue best-practices documents as early as the end of the year.



16 of the hottest IT skills for 2015

Written by admin
January 4th, 2015

2015 will bring new opportunities for professional growth and development, not to mention more money. But what specific skills will add the most value for your career advancement in the new year?

The Hottest IT and Tech Skills for 2015
What skills should IT professionals add to their toolbox to increase their compensation in 2015? To find out, CIO.com worked with David Foote, chief analyst and research officer with Foote Partners, to comb through the firm’s quarterly data and uncover which skills will lead to higher pay in the short term and help IT pros navigate the tech industry toward their next career move in the long term.

Foote Partners uses a proprietary methodology to track and validate compensation data for tech workers. It collects data on 734 individual certified and noncertified IT skills. Of those skills, 384 are of the noncertified variety and are the focus of this report.

Cloud Skills
Cloud adoption continues to accelerate as organizations large and small try to capitalize on cloud computing’s cost benefits. In fact, it has become mainstream in IT organizations: cloud adoption among IT departments was somewhere near 90 percent in 2014. “Companies began discovering the cloud about four years ago and it’s been quite volatile in the last year. Will companies continue to invest in the cloud? The answer is ‘yes,’ ” according to Foote.

Although Foote Partners has found a 3 percent to 3.5 percent drop in market value, Foote notes it’s an area with some unpredictability, but it’s cyclical. “It’s a volatile marketplace when it comes to talent,” he says.

Architecture
Foote points out that as organizational complexity increases, businesses are becoming more aware of the value of a great architect, and these roles are showing up with more frequency among his clients. The Open Group Architecture Framework (TOGAF), in particular, is the most highly paid noncertified IT skill and a regular on the hot-skills lists.

“We know a lot of companies are getting into architecture in a bigger way. They’re hiring more architects; they’re restructuring their enterprise architect departments. They’re starting to see a lot of value, and no one is really debating that you can never have too many talented architects in your business. This is not something you can ignore. Everyone is thinking that no matter what we do today, we have to always be thinking down the road — three years, five years or more. The people that do that for a living are architects,” says Foote.

Database/Big Data Skills
Big data is attractive to organizations for a number of reasons. Unfortunately, many of those reasons haven’t panned out. According to Foote, companies got caught up in the buzz and now they are taking a more conservative approach. That said, this is an area that Foote Partners expects to grow in 2015. Adding any of these skills to your skillset will make you more valuable to any employer looking to capitalize on the promise of big data.

Although it just missed the firm’s highest-paying noncertified IT skills list, pay for data science skills is expected to increase into 2015. “This group [of skills] is in transition. There is still a big buzz factor around data sciences which will result in companies paying more for this skill,” says Foote.

Data management will increasingly be important as companies try to wrangle actionable data from their many disparate sources of data.

Applications Development Skills
Applications development is undoubtedly a hot skills area. Demand for both mobile and desktop developers continues to increase and this trend will continue well into 2015. However, Foote Partners data suggests that the three skills listed here are poised for significant growth in the coming year. It’s worth noting that JavaFX and user interface/experience design skills also made Foote Partners list of highest paying noncertified IT skills.

Organizations are more regularly refining their digital customer experience, making user interface and experience design crucial skills in the coming year.

JavaFX is coming on strong as it replaces Swing in the marketplace.

Agile programming is new to the noncertified IT skills list, but Foote predicts the pay premium for this area will grow into 2015.

SAP and Enterprise Business Applications Skills
SAP is a global software vendor whose ERP applications range from business operations to CRM. Foote Partners tracks nearly 93 SAP modules and has noticed a lot of fluctuation in value among these modules over the last year. However, according to Foote Partners data, SAP CO-PA, SAP FI-FSCM, SAP GTS and SAP SEM are all expected to be hot in 2015.

Security Skills
Security came to the forefront in 2014, with organizations large and small being targeted by cybercriminals. The list of businesses attacked is long and includes heavyweights like Sony, eBay and Target. Foote points out that cybersecurity is now part of today’s lexicon for techies and consumers alike.

“Security is blown wide open. Cybersecurity has now become an issue that everyone sees as important. Inside cybersecurity skills and certifications there is a lot of activity. It’s gone mainstream. I think you’re going to see cybersecurity on this list for some time to come,” says Foote.

Management, Process and Methodology Skills
Project and program management are new to the list, but Foote Partners predicts this area will be in high demand in 2015.

Foote emphasizes that fluctuations in pay premiums don’t tell the whole story. The firm also applies what it has learned from data provided by the 2,648 employers it works with. That’s why some of the skills covered appear flat: they make the list of hot skills because Foote Partners has uncovered data or trends that will likely drive up pay in these areas in 2015.

“There is more than recent pay premium track record considered in our forecast list. We talk to a lot of people in the field making decisions about skills acquisition at their companies. We look at tech evolution and where we think skills consumption is heading and so forth,” says Foote.


 
