Archive for the ‘Tech’ Category


SDN will support IoT by centralizing control, abstracting network devices, and providing flexible, dynamic, automated reconfiguration of the network

Organizations are excited about the business value of the data that will be generated by the Internet of Things (IoT). But there’s less discussion about how to manage the devices that will make up the network, secure the data they generate and analyze it quickly enough to deliver the insights businesses need.

Software defined networking (SDN) can help meet these needs. By virtualizing network components and services, SDN can rapidly and automatically reconfigure network devices, reroute traffic and apply authentication and access rules. All this can help speed and secure data delivery, and improve network management, for even the most remote devices.

SDN enables the radical simplification of network provisioning with predefined policies for plug-and-play set-up of IoT devices, automatic detection and remediation of security threats, and the provisioning of the edge computing and analytics environments that turn data into insights.

Consider these two IoT use cases:
* Data from sensors within blowout preventers can help oil well operators save millions of dollars a year in unplanned downtime. These massive data flows, ranging from pressure readings to valve positions, are now often sent from remote locations to central servers over satellite links. This not only increases the cost of data transmission but delays its receipt and analysis. This latency can be critical – or even deadly – when the data is used to control powerful equipment or sensitive industrial processes.

Both these problems will intensify as falling prices lead to the deployment of many more sensors, and technical advances allow each sensor to generate much more data. Processing more data at the edge (i.e., near the well) and determining which data is worth sending to a central location (what some call fog or edge computing) alleviates both problems, as does the rapid provisioning of network components and services, while real-time application of security rules helps protect proprietary information.

* Data from retail environments, such as from a customer’s smartphone monitoring their location and the products they take pictures of, or in-store sensors monitoring their browsing behavior, can be used to deliver customized offers that encourage an immediate sale. Again, the volume of data and the need for fast analysis and action call for the rapid provisioning of services and edge data processing, along with rigorous security to ease privacy concerns.

Making such scenarios real requires overcoming unprecedented challenges.
One is the sheer number of devices, which is estimated to reach 50 billion by 2020, with each new device expanding the “attack surface” exposed to hackers. Another is the amount of data moving over this network, with IDC projecting IoT will account for 10% of all data on the planet by 2020.

Then there is the variety of devices that need to be managed and supported. These range from network switches supporting popular management applications and protocols, to legacy SCADA (supervisory control and data acquisition) devices and those that lack the compute and/or memory to support standard authentication or encryption. Finally, there is the need for very rapid, and even real-time, response, especially for applications involving safety (such as hazardous industrial processes) or commerce (such as monitoring of inventory or customer behavior).

Given this complexity and scale, manual network management is simply not feasible. SDN provides the only viable, cost-effective means to manage the IoT, secure the network and the data on it, minimize bandwidth requirements and maximize the performance of the applications and analytics that use its data.

SDN brings three important capabilities to IoT:

Centralization of control through software that has complete knowledge of the network, enabling automated, policy-based control of even massive, complex networks. Given the huge potential scale of IoT environments, SDN is critical in making them simple to manage.

Abstraction of the details of the many devices and protocols in the network, allowing IoT applications to access data, enable analytics and control the devices, and add new sensors and network control devices, without exposing the details of the underlying infrastructure. SDN simplifies the creation, deployment and ongoing management of the IoT devices and the applications that benefit from them.

The flexibility to tune the components within the IoT (and manage where data is stored and analyzed) to continually maximize performance and security as business needs and data flows change. IoT environments are inherently dispersed, with many end devices and edge computing. As a result, the network is even more critical than in standard application environments. SDN’s ability to dynamically change network behavior based on new traffic patterns, security incidents and policy changes will enable IoT environments to deliver on their promise.

For example, through the use of predefined policies for plug-and-play set-up, SDN allows for the rapid and easy addition of new types of IoT sensors. By abstracting network services from the hardware on which they run, SDN allows automated, policy-based creation of virtual load balancers, quality of service for various classes of traffic, and the provisioning of network resources for peak demands.
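To make that concrete, here is a minimal sketch of what pushing such a plug-and-play policy to an SDN controller’s northbound REST API might look like. The endpoint, credentials and policy fields are hypothetical, not the API of any specific controller; real controllers (OpenDaylight, for example) expose their own schemas.

import requests

CONTROLLER = "https://sdn-controller.example.com:8443"  # hypothetical address

# Hypothetical plug-and-play policy: any device matching this profile is
# dropped into a quarantined VLAN and rate-limited until it authenticates.
policy = {
    "name": "iot-onboarding-default",
    "match": {"device-class": "iot-sensor", "authenticated": False},
    "actions": {"vlan": 999, "rate-limit-kbps": 256},
}

resp = requests.post(
    f"{CONTROLLER}/policies",   # hypothetical endpoint
    json=policy,
    auth=("admin", "admin"),    # demo credentials only
    timeout=5,
    verify=False,               # lab setting; use real certificates in production
)
resp.raise_for_status()
print("Policy installed:", resp.json())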

The ease of adding and removing resources reduces the cost and risk of IoT experiments by allowing the easy deprovisioning and reuse of the network infrastructure when no longer needed.

SDN will make it easier to find and fight security threats through the improved visibility it provides into network traffic, right to the edge of the network. It also makes it easy to apply automated policies that redirect suspicious traffic to, for example, a honeynet where it can be safely examined. By making network management less complex, SDN allows IT to set and enforce more segmented access controls.

SDN can provide a dynamic, intelligent, self-learning layered model of security that provides walls within walls and ensures people can only change the configuration of the devices they’re authorized to “touch.” This is far more useful than the traditional “wall” around the perimeter of the network, which won’t work with the IoT because of its size and the fact the enemy is often inside the firewall, in the form of unauthorized actors updating firmware on unprotected devices.

Finally, by centralizing configuration and management, SDN will allow IT to effectively program the network to make automatic, real-time decisions about traffic flow. It will allow not only sensor data but also data about the health of the network itself to be analyzed close to the network edge, giving IT the information it needs to prevent traffic jams and security risks. The centralized configuration and management of the network, and the abstraction of network devices, also make it far easier to manage applications that run at the edge of the IoT.

For example, SDN will allow IT to fine-tune data aggregation, so data that is less critical is held at the edge and not transmitted to core systems until it won’t slow critical application traffic. This edge computing can also perform fast, local analysis and speed the results to the network core if the analysis indicates an urgent situation, such as the impending failure of a jet engine.
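A toy illustration of that aggregation logic, with the threshold and names invented for the example: readings are buffered locally at the edge, and only urgent ones are forwarded to the core immediately.

from collections import deque

PRESSURE_LIMIT = 950.0   # hypothetical alarm threshold
buffer = deque()         # low-priority readings held at the edge

def handle_reading(reading):
    """Forward urgent readings immediately; batch the rest for off-peak upload."""
    if reading["pressure"] >= PRESSURE_LIMIT:
        send_to_core(reading)       # urgent: e.g. impending equipment failure
    else:
        buffer.append(reading)      # hold locally until the link is idle

def flush_when_idle():
    """Drain batched readings when they won't slow critical traffic."""
    while buffer:
        send_to_core(buffer.popleft())

def send_to_core(reading):
    print("uploading", reading)     # stand-in for the real uplink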

Prepare Now

IT organizations can become key drivers in capturing the promised business value of IoT through the use of SDNs. But this new world is a major change and will require some planning.

To prepare for the intersection of IoT and SDN, you should start thinking about what policies in areas such as security, Quality of Service (QoS) and data privacy will make sense in the IoT world, and how to structure and implement such policies in a virtualized network.

All companies have policies today, but typically they are implicit – that is, buried in a morass of ACLs and network configurations. SDN will turn this process on its head, allowing IT teams to develop human-readable policies that are implemented by the network. IT teams should start understanding how they’ve configured today’s environment so that they can decide which policies should be brought forward.
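As a sketch of the difference, compare a human-readable policy with the device-level ACL entries it might compile down to. Everything here is illustrative; real SDN policy languages (and ACL syntax) vary by vendor.

# Human-readable intent, as an engineer might express it:
policy = {
    "who": "hvac-sensors",
    "may-talk-to": ["building-mgmt-servers"],
    "ports": [443],
    "everything-else": "deny",
}

# Roughly what today's implicit version looks like, buried in device configs:
#   access-list 142 permit tcp 10.20.0.0 0.0.255.255 host 10.9.1.5 eq 443
#   access-list 142 deny ip 10.20.0.0 0.0.255.255 any
# An SDN controller would generate and push such rules from the intent above.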

They should plan now to include edge computing and analytics in their long-term vision of the network. At the same time, they should remember that IoT and SDN are in their early stages, meaning their network and application planners should expect unpredicted changes in, for example, the amounts of data their networks must handle, and the need to dynamically reconfigure them for local rather than centralized processing. The key enablers, again, will be centralization of control, abstraction of network devices and flexible, dynamic, automated reconfiguration of the network. Essential, too, is the isolation of network slices: segmenting the network by proactively pushing policy from a centralized controller to cordon off various types of traffic. Centralized control planes offer the advantages of easy operations and management.

IT teams should also evaluate their network, compute and data needs across the entire IT spectrum, as the IoT will require an end-to-end SDN solution encompassing all manner of devices, not just those from one domain within IT, but across the data center, Wide Area Network (WAN) and access.

Lastly, IT will want to get familiar with app development in edge computing environments, which mix local and centralized processing. As the network abstraction presented to the app layer changes and becomes highly programmable, network teams need to invest in resources and training around these programming models (e.g., REST) so that they can more easily partner with the app development teams.

IoT will be so big, so varied and so remote that conventional management tools just won’t cut it. Now is the time to start learning how SDN can help you manage this new world and assure the speedy, secure delivery and analysis of the data it will generate.



In the earliest days of Amazon.com, SQL databases weren’t cutting it, so the company created Dynamo – and later DynamoDB – helping usher in the NoSQL market

Behind every great ecommerce website is a database, and in the early 2000s Amazon.com’s database was not keeping up with the company’s business.

Part of the problem was that Amazon didn’t have just one database – it relied on a series of them, each with its own responsibility. As the company headed toward becoming a $10 billion business, the number and size of its SQL databases exploded and managing them became more challenging. By the 2004 holiday shopping rush, outages became more common, caused in large part by overloaded SQL databases.

Something needed to change.
But instead of looking for a solution outside the company, Amazon developed its own database management system. It was a whole new kind of database, one that threw out the rules of traditional SQL varieties and was able to scale up and up and up. In 2007 Amazon shared its findings with the world: CTO Werner Vogels and his team released a paper titled “Dynamo: Amazon’s Highly Available Key-value Store.” Some credit it with being the moment the NoSQL database market was born.

The problem with SQL
The relational databases that have been around for decades and most commonly use the SQL programming language are ideal for organizing data in neat tables and running queries against them. Their success is undisputed: Gartner estimates the SQL database market to be $30 billion.

But in the early to mid-2000s, companies like Amazon, Yahoo and Google had data demands that SQL databases just didn’t address well. (To throw a bit of computer science at you, the CAP theorem states that it’s impossible for a distributed system, such as a big database, to simultaneously guarantee consistency, availability and partition tolerance. SQL databases prioritize consistency over availability and flexibility, which makes them great for managing core enterprise data such as financial transactions, but less suited to other types of jobs.)

Take Amazon’s online shopping cart service, for example. Customers browse the ecommerce website and put something in their virtual shopping cart where it is saved and potentially purchased later. Amazon needs the data in the shopping cart to always be available to the customer; lost shopping cart data is a lost sale. But, it doesn’t necessarily need every node of the database all around the world to have the most up-to-date shopping cart information for every customer. A SQL/relational system would spend enormous compute resources to make data consistent across the distributed system, instead of ensuring the information is always available and ready to be served to customers.

One of the fundamental tenets of Amazon’s Dynamo, and NoSQL databases in general, is that they sacrifice data consistency for availability. Amazon’s priority is to maintain shopping cart data and to have it served to customers very quickly. Plus, the system has to be able to scale to serve Amazon’s fast-growing demand. Dynamo solves all of these problems: It backs up data across nodes, and can handle tremendous load while maintaining fast and dependable performance.

“It was one of the first NoSQL databases,” explains Khawaja Shams, head of engineering at Amazon DynamoDB. “We traded off consistency and very rigid querying semantics for predictable performance, durability and scale – those are the things Dynamo was super good at.”
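To make the shopping-cart trade-off concrete, here is a minimal sketch of a cart write and an eventually consistent read against today’s DynamoDB using the boto3 Python SDK. The table name and attribute names are invented for the example.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
carts = dynamodb.Table("ShoppingCarts")  # hypothetical table keyed on customer_id

# Save the cart; the write is durably replicated across nodes.
carts.put_item(Item={"customer_id": "c-123", "items": ["sku-42", "sku-7"]})

# Read it back. ConsistentRead=False (the default) accepts a slightly stale
# answer in exchange for lower latency and higher availability -- exactly the
# trade-off the article describes.
resp = carts.get_item(Key={"customer_id": "c-123"}, ConsistentRead=False)
print(resp.get("Item"))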

DynamoDB: A database in the cloud
Dynamo fixed many of Amazon’s problems that SQL databases could not. But throughout the mid-to-late 2000s, it still wasn’t perfect. Dynamo boasted the functionality that Amazon engineers needed, but required substantial resources to install and manage.

The introduction of DynamoDB in 2012 proved to be a major upgrade, though. The hosted version of the database Amazon uses internally lives in Amazon Web Services’ IaaS cloud and is fully managed. Amazon engineers and AWS customers don’t provision a database or manage storage of the data. All they do is request the throughput they need from DynamoDB. Customers pay $0.0065 per hour for write capacity of about 36,000 writes per hour, plus $0.25 per GB of data stored in the system per month. If the application needs more capacity, then with a few clicks the database spreads the workload over more nodes.
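“Requesting the throughput you need” boils down to a couple of API calls. A sketch with boto3, using made-up table names and capacity numbers:

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

# Create a table with explicitly provisioned read/write throughput.
client.create_table(
    TableName="ShoppingCarts",
    AttributeDefinitions=[{"AttributeName": "customer_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "customer_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)

# Holiday rush coming? Scale the same table up without touching any hardware.
client.update_table(
    TableName="ShoppingCarts",
    ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 500},
)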

AWS is notoriously opaque about how DynamoDB and many of its other infrastructure-as-a-service products run under the covers, but this promotional video reveals that the service employs solid-state drives and notes that when customers use DynamoDB, their data is spread across availability zones/data centers to ensure availability.

Forrester principal analyst Noel Yuhanna calls it a “pretty powerful” database and considers it one of the top NoSQL offerings, especially for key-value store use cases.

DynamoDB has grown significantly since its launch. While AWS will not release customer figures, company engineer James Hamilton said in November that DynamoDB has grown 3x in requests it processes annually and 4x in the amount of data it stores compared to the year prior. Even with that massive scale and growth, DynamoDB has consistently returned queries in three to four milliseconds.

Feature-wise, DynamoDB has grown, too. NoSQL databases are generally broken into a handful of categories: key-value store databases organize information with a key and a value; document databases allow full documents to be searched against; and graph databases track connections between data. DynamoDB originally started as a key-value database, but last year AWS expanded it to become a document database by supporting JSON-formatted files. AWS last year also added Global Secondary Indexes to DynamoDB, which let users query table data on alternate keys rather than only on the table’s primary key – useful, for example, for running querying, analytics or testing workloads alongside production lookups.
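As an illustration, here is a sketch of querying a hypothetical Global Secondary Index with boto3; the table, index and attribute names are all invented:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Orders")

# The base table is keyed on order_id; a hypothetical GSI named
# "status-index" is keyed on order status instead, so analytics-style
# queries don't have to scan the whole table.
resp = table.query(
    IndexName="status-index",
    KeyConditionExpression=Key("status").eq("pending"),
)
for item in resp["Items"]:
    print(item["order_id"], item["status"])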

NoSQL’s use case and vendor landscape
The fundamental advantage of NoSQL databases is their ability to scale and have flexible schema, meaning users can easily change how data is structured and run multiple queries against it. Many new web-based applications, such as social, mobile and gaming-centric ones, are being built using NoSQL databases.

While Amazon may have helped jumpstart the NoSQL market, it is now one of dozens of vendors attempting to cash in on it. Nick Heudecker, a Gartner researcher, stresses that even though NoSQL has captured the attention of many developers, it is still a relatively young technology. He estimates that annual revenue from NoSQL products has yet to surpass half a billion dollars (though that’s not an official Gartner estimate). Heudecker says the majority of his enterprise client inquiries are still about SQL databases.

NoSQL competitors MongoDB, MarkLogic, Couchbase and DataStax have strong standings in the market as well, and some seem to have greater traction among enterprise customers compared to DynamoDB, Heudecker says.

Living in the cloud

What’s holding DynamoDB back in the enterprise market? For one, it has no on-premises version – it can only be used in AWS’s cloud. Some users just aren’t comfortable using a cloud-based database, Heudecker says. DynamoDB competitors offer users the opportunity to run databases on their own premises behind their own firewall.

Khawaja Shams, director of engineering for DynamoDB, says that when the company created Dynamo it had to throw out the old rules of SQL databases.

Shams, AWS’s DynamoDB engineering head, says because the technology is hosted in the cloud, users don’t have to worry about configuring or provisioning any hardware. They just use the service and scale it up or down based on demand, while paying only for storage and throughput, he says.

For security-sensitive customers, there are opportunities to encrypt data as DynamoDB stores it. Plus, DynamoDB is integrated with AWS – the market’s leading IaaS platform, according to Gartner’s Magic Quadrant report – which supports a variety of tools, including relational databases such as Aurora and RDS.

AdRoll rolls with AWS DynamoDB

Marketing platform provider AdRoll, which serves more than 20,000 customers in 150 countries, is among those organizations comfortable using the cloud-based DynamoDB. Basically, if an ecommerce site visitor browses a product page but does not buy the item, AdRoll bids on ad space on another site the user visits to show the product they were previously considering. It’s an effective method for getting people to buy products they were already eyeing.

It’s really complicated for AdRoll to figure out which ads to serve to which users, though. Even more complicated is that AdRoll needs to decide, in about the time it takes for a webpage to load, whether it will bid on an ad spot and which ad to place. That’s the job of CTO Valentino Volonghi – he has about 100 milliseconds to play with. Most of that time is gobbled up by network latency, so needless to say AdRoll requires a reliably fast platform. It also needs huge scale: AdRoll considers more than 60 billion ad impressions every day.

AdRoll uses DynamoDB and Amazon’s Simple Storage Service (S3) to sock away data about customers and help its algorithm decide which ads to buy for customers. In 2013, AdRoll had 125 billion items in DynamoDB; it’s now up to half a trillion. It makes 1 million requests to the system each second, and the data is returned in less than 5 milliseconds — every time. AdRoll has another 17 million files uploaded into Amazon S3, taking up more than 1.5 petabytes of space.

AdRoll didn’t have to build a global network of data centers to power its product, thanks in large part to using DynamoDB.

“We haven’t spent a single engineer to operate this system,” Volonghi says. “It’s actually technically fun to operate a database at this massive scale.”

Not every company is going to have the needs of Amazon.com’s ecommerce site or AdRoll’s real-time bidding platform. But many are struggling to achieve greater scale without major capital investments. The cloud makes that possible, and DynamoDB is a prime example.



The interim CEO would have to leave his post at Square to take over at Twitter

A week and a half after Dick Costolo announced that he would be stepping down from the CEO role at Twitter, the company’s board of directors has sent a shot across the bow of one of the expected front-runner candidates to take the social network’s top job.

The social micro-blogging company’s search committee will only consider CEO candidates “who are in a position to make a full-time commitment to Twitter,” the board said. That would seem to rule out Jack Dorsey, the company’s co-founder who currently works as the CEO of Square and will be filling in as interim CEO of Twitter.

Dorsey has said that he plans to remain at the helm of the payment processing company he co-founded, but hasn’t explicitly ruled out a bid for a permanent berth in Twitter’s top job. Now the Twitter board has made it clear that he would have to depart Square if he wants to run Twitter. That’s a rough proposition for Dorsey, especially since Square is reportedly planning to go public this year.

As for the overall search process, Twitter’s search committee has contracted with executive search firm Spencer Stuart to evaluate internal and external candidates for the job. The board hasn’t set a firm time frame for its hiring of a new CEO, saying that there’s a “sense of urgency” to the process but that it will take its time to find the right person for the job.

Whoever steps into the top spot at Twitter will have to contend with increased pressure on the company from Wall Street. Investors have been disappointed by Twitter’s revenue and user growth in recent quarters.


 


Machine intelligence can be used to police networks and fill gaps where the available resources and capabilities of human intelligence are clearly falling short

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Humans are clearly incapable of monitoring and identifying every threat on today’s vast and complex networks using traditional security tools. We need to enhance human capabilities by augmenting them with machine intelligence. Mixing man and machine – in some ways, similar to what OmniCorp did with RoboCop – can heighten our ability to identify and stop a threat before it’s too late.

The “dumb” tools that organizations rely on today are simply ineffective. There are two consistent, yet still surprising, things that make this ineptitude fairly apparent. The first is the amount of time hackers have free rein within a system before being detected: eight months at Premera and P.F. Chang’s, six months at Neiman Marcus, five months at Home Depot, and the list goes on.

The second surprise is the response. Everyone usually looks backwards, trying to figure out how the external actors got in. Finding the proverbial leak and plugging it is obviously important, but this approach only treats a symptom instead of curing the disease.

The disease, in this case, is the growing faction of hackers that are getting so good at what they do they can infiltrate a network and roam around freely, accessing more files and data than even most internal employees have access to. If it took months for Premera, Sony, Target and others to detect these bad actors in their networks and begin to patch the holes that let them in, how can they be sure that another group didn’t find another hole? How do they know other groups aren’t pilfering data right now? Today, they can’t know for sure.

The typical response
Until recently, companies have really only had one option as a response to rising threats, a response that most organizations still employ. They re-harden systems, ratchet up firewall and IDS/IPS rules and thresholds, and put stricter web proxy and VPN policies in place. But by doing this they drown their incident response teams in alerts.

Tightening policies and adding to the number of scenarios that will raise a red flag just makes the job more difficult for security teams that are already stretched thin. This causes thousands of false positives every day, making it physically impossible to investigate every one. As recent high profile attacks have proven, the deluge of alerts is helping malicious activity slip through the cracks because, even when it is “caught,” nothing is being done about it.

In addition, clamping down on security rules and procedures just wastes everyone’s time. By design, tighter policies will restrict access to data, and in many cases, that data is what employees need to do their jobs well. Employees and departments will start asking for the tools and information they need, wasting precious time for them and the IT/security teams that have to vet every request.

Putting RoboCop on the case
Machine intelligence can be used to police massive networks and help fill gaps where the available resources and capabilities of human intelligence are clearly falling short. It’s a bit like letting RoboCop police the streets, but in this case the main armament is statistical algorithms. More specifically, statistics can be used to identify abnormal and potentially malicious activity as it occurs.

According to Dave Shackleford, an analyst at SANS Institute and author of its 2014 Analytics and Intelligence Survey, “one of the biggest challenges security organizations face is lack of visibility into what’s happening in the environment.” The survey of 350 IT professionals asked why they have difficulty identifying threats and a top response was their inability to understand and baseline “normal behavior.” It’s something that humans just can’t do in complex environments, and since we’re not able to distinguish normal behavior, we can’t see abnormal behavior.

Instead of relying on humans looking at graphs on big screen monitors, or on human-defined rules and thresholds to raise flags, machines can learn what normal behavior looks like, adjusting in real time and becoming smarter as they process more information. What’s more, machines possess the speed required to process the massive amount of information that networks create, and they can do it in near-real time. Some networks process terabytes of data every second; humans, by comparison, can process no more than about 60 bits per second.
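The core idea of baselining “normal” is simple enough to sketch. Below is a deliberately minimal example – a rolling mean and standard deviation with a z-score test; production systems use far richer models, and the window and threshold values here are arbitrary.

from collections import deque
import statistics

class BaselineDetector:
    """Flag observations that deviate sharply from recent history."""

    def __init__(self, window=1000, threshold=4.0):
        self.history = deque(maxlen=window)   # rolling baseline
        self.threshold = threshold            # z-score alarm level

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 30:           # need some history first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

# e.g. feed it bytes-per-minute for one host
detector = BaselineDetector()
for rate in [120, 130, 125, 118, 122] * 10 + [50_000]:
    if detector.observe(rate):
        print("anomaly:", rate)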

Putting aside the need for speed and capacity, a larger issue with the traditional way of monitoring for security issues is that rules are dumb. That’s not just name-calling, either; they’re literally dumb. Humans set rules that tell the machine how to act and what to do – the speed and processing capacity is irrelevant. While rule-based monitoring systems can be very complex, they’re still built on a basic “if this, then do that” formula. Enabling machines to think for themselves and feed better data and insight to the humans that rely on them is what will really improve security.

It’s almost absurd to not have a layer of security that thinks for itself. Imagine in the physical world if someone was crossing the border every day with a wheelbarrow full of dirt and the customs agents, being diligent at their jobs and following the rules, were sifting through that dirt day after day, never finding what they thought they were looking for. Even though that same person repeatedly crosses the border with a wheelbarrow full of dirt, no one ever thinks to look at the wheelbarrow. If they had, they would have quickly learned he’s been stealing wheelbarrows the whole time!

Just because no one told the customs agents to look for stolen wheelbarrows doesn’t make it OK, but as they say, hindsight is 20/20. In the digital world, we don’t have to rely on hindsight anymore, especially now that we have the power to put machine intelligence to work and recognize anomalies that could be occurring right under our noses. In order for cyber-security to be effective today, it needs at least a basic level of intelligence. Machines that learn on their own and detect anomalous activity can find the “wheelbarrow thief” that might be slowly syphoning data, even if you don’t specifically know that you’re looking for him.

Anomaly detection is among the first technology categories where machine learning is being put to use to enhance network and application security. It’s a form of advanced security analytics, a term that’s used quite frequently. However, there are a few requirements this type of technology must meet to truly be considered “advanced.” It must be easy to deploy, operate continuously against a broad array of data types and sources, and work at huge data scales, producing high-fidelity insights that don’t further add to the alert blindness already confronting security teams.

Leading analysts agree that machine learning will soon be a “need to have” in order to protect a network. In a Nov. 2014 Gartner report titled “Add New Performance Metrics to Manage Machine-Learning-Enabled Systems,” analyst Will Cappelli directly states, “machine learning functionality will, over the next five years, gradually become pervasive and, in the process, fundamentally modify system performance and cost characteristics.”

While machine learning is certainly not a silver bullet that will solve all security challenges, there’s no doubt it will provide better information to help humans make better decisions. Let’s stop asking people to do the impossible and let machine intelligence step in to help get the job done.



While businesses plan to increase IT hiring in 2015, it may be easier said than done, especially when it comes to hiring software developers.

The good news is that more businesses are planning to boost their IT hiring in 2015. The bad news? Many are struggling to find talent to fill vacant or newly created roles, especially for software developers and data analytics pros, according to a recent survey from HackerRank, which matches IT talent with hiring companies using custom coding challenges.

In a survey of current and potential customers performed in March, HackerRank asked 1,300 hiring managers about their hiring outlook for the coming year, their hiring practices and the challenges they faced in filling open positions. Of those who responded to the survey, 76 percent say they planned to fill more technical roles in the remainder of 2015 than they did in 2014.
Theory vs. practice

But intending to fill open positions and actually filling them are two different things, as the survey results show. While 94 percent of respondents to the survey say they’re hiring Java developers and 68 percent are hiring for user interface/user experience (UI/UX) designers, 41 percent also claim these roles are difficult to fill.

“That number was the most surprising when we looked at the results. We knew it was going to be a significant percentage, but it seems customers are really struggling to fill these software development roles,” says Vivek Ravisankar, co-founder and CEO of HackerRank.
Java continues to dominate

The survey also revealed that Java continues to be the dominant language sought by hiring managers and recruiters. Of the survey respondents, 69 percent say Java is the most important skill candidates can have.

“Many of our customers are involved in Web-based business or in developing apps. And Java is instrumental for both of these business pursuits — we absolutely expected to hear this from the survey, and we weren’t surprised,” says Ravisankar.
What makes these positions so difficult to fill?

Part of the problem may lie with candidates’ perceptions of a company’s brand, says Tejal Parekh, HackerRank’s vice president of marketing. “We work with a lot of customers in areas that aren’t typically thought of as technology hotspots. For instance, in the finance sector we have customers facing a dearth of IT talent; they’re all innovative companies with a strong technology focus, but candidates don’t see them as such. They want to go to Facebook or Amazon,” says Parekh.

Another challenge lies with the expectations hiring companies have of their candidate pool, says Ravisankar. “There’s also an unconscious bias issue with customers who sometimes limit themselves by not looking outside the traditional IT talent pool. They’re only considering white, male talent from specific schools or specific geographic areas,” says Ravisankar.
Up the ante

As demand for IT talent increases, so do IT salaries. According to the survey, 67 percent of hiring managers say that salaries for technical positions have increased between 2014 and 2015 while 32 percent say they have stayed the same. Overall, HackerRank’s survey highlights the great opportunities available for software development talent and for the companies vying to hire them.



Microsoft released eight security bulletins, two rated critical, but four address remote code execution vulnerabilities that an attacker could exploit to take control of a victim’s machine.

For June 2015 “Update Tuesday,” Microsoft released eight security bulletins; only two of the security updates are rated critical, resolving remote code execution (RCE) flaws, but two patches rated important also address RCE vulnerabilities.

Rated as Critical
MS15-056 is a cumulative security update for Internet Explorer, which fixes 24 vulnerabilities. Qualys CTO Wolfgang Kandek added, “This includes 20 critical flaws that can lead to RCE which an attacker would trigger through a malicious webpage. All versions of IE and Windows are affected. Patch this first and fast.”

Microsoft said the patch resolves vulnerabilities by “preventing browser histories from being accessed by a malicious site; adding additional permission validations to Internet Explorer; and modifying how Internet Explorer handles objects in memory.”

MS15-057 fixes a hole in Windows that could allow remote code execution if Windows Media Player opens specially crafted media content that is hosted on a malicious site. An attacker could exploit this vulnerability to “take complete control of an affected system remotely.”

Rated as Important
MS15-058 is currently nothing more than a placeholder, but MS15-059 and MS15-060 both address remote code execution flaws.

MS15-059 addresses RCE vulnerabilities in Microsoft Office. Although it’s rated important for Microsoft Office 2010 and 2013, Microsoft Office Compatibility Pack Service Pack 3 and Microsoft Office 2013 RT, Kandek said it should be your second patching priority. If an attacker could convince a user to open a malicious file with Word or any other Office tool, then he or she could take control of a user’s machine. “The fact that one can achieve RCE, plus the ease with which an attacker can convince the target to open an attached file through social engineering, make this a high-risk vulnerability.”

MS15-060 resolves a vulnerability in Microsoft Windows “common controls.” The vulnerability “could allow remote code execution if a user clicks a specially crafted link, or a link to specially crafted content, and then invokes F12 Developer Tools in Internet Explorer.” Kandek explained, “MS15-060 is a vulnerability in the common controls of Windows which is accessible through Internet Explorer Developer Menu. An attack needs to trigger this menu to be successful. Turning off developer mode in Internet Explorer (why is it on by default?) is a listed work-around and is a good defense in depth measure that you should take a look at for your machines.”

The last four patches Microsoft issued address elevation of privilege vulnerabilities.

MS15-061 resolves bugs in Microsoft Windows kernel-mode drivers. “The most severe of these vulnerabilities could allow elevation of privilege if an attacker logs on to the system and runs a specially crafted application. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

MS15-062 addresses a security hole in Microsoft Active Directory Federation Services. Microsoft said, “The vulnerability could allow elevation of privilege if an attacker submits a specially crafted URL to a target site. Due to the vulnerability, in specific situations specially crafted script is not properly sanitized, which subsequently could lead to an attacker-supplied script being run in the security context of a user who views the malicious content. For cross-site scripting attacks, this vulnerability requires that a user be visiting a compromised site for any malicious action to occur.”

MS15-063 is another patch for Windows kernel that could allow EoP “if an attacker places a malicious .dll file in a local directory on the machine or on a network share. An attacker would then have to wait for a user to run a program that can load a malicious .dll file, resulting in elevation of privilege. However, in all cases an attacker would have no way to force a user to visit such a network share or website.”

MS15-064 resolves vulnerabilities in Microsoft Exchange Server by “modifying how Exchange web applications manage same-origin policy; modifying how Exchange web applications manage user session authentication; and correcting how Exchange web applications sanitize HTML strings.”

It would be wise to patch Adobe Flash while you are at it as four of 13 vulnerabilities patched are rated critical.

Happy patching!



During a talk he gave Friday at Google’s I/O developer conference in San Francisco, Google developer advocate Colt McAnlis said that Android apps, almost across the board, are not architected correctly for the best networking performance.

“Networking performance is one of the most important things that every one of your apps does wrong,” he told the crowd.


By structuring the way apps access the network inefficiently, McAnlis said, developers are imposing needless costs in terms of performance and battery life – costs for which their users are on the hook.

“Bad networking costs your customers money,” he said. “Every rogue request you make, every out-of-sync packet, every two-bit image you request, the user has to pay for. Imagine if I went out and told them that.”

The key to fixing the problem? Use the radio less, and don’t move so much data around, McAnlis said.

One way to do this is batching, he said – architecting an app such that lower-priority data is sent when a device’s networking hardware has been activated by something else, minimizing the amount of time and energy used by the radio.
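The batching idea itself isn’t Android-specific (on Android it is typically realized with platform facilities such as JobScheduler). Here is a language-agnostic sketch in Python of the underlying pattern: hold low-priority requests until something urgent has already woken the radio, then piggyback them.

import queue

class RadioBatcher:
    """Defer low-priority traffic until the radio is awake anyway."""

    def __init__(self, transmit):
        self.pending = queue.Queue()
        self.transmit = transmit          # the real network send goes here

    def send(self, request, urgent=False):
        if urgent:
            self.transmit(request)        # radio is now powered up...
            self.flush()                  # ...so drain the cheap stuff too
        else:
            self.pending.put(request)     # costs nothing while queued

    def flush(self):
        while not self.pending.empty():
            self.transmit(self.pending.get())

batcher = RadioBatcher(transmit=lambda r: print("sending", r))
batcher.send("analytics-ping")                # queued, radio stays asleep
batcher.send("user-tapped-buy", urgent=True)  # wakes radio, flushes the queue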

Pre-fetching data is another important technique for smoothing out network usage by Android apps, he said.

“If you can somehow sense that you’re going to make six or seven requests in the future, don’t wait for the device to go to sleep and then wake it up again – take advantage of the fact that the chip is awake right now, and make the requests right now,” McAnlis said.

He also urged developers to use Google Cloud Messaging, rather than relying on server polling for updates.

“Polling the server is horrible. … It is a waste of the user’s time,” McAnlis said. “Think about this: Every time you poll the server and it comes back with a null packet, telling you that there’s no new data, the user’s paying for that.”



6 IT leaders share tips to drive collaboration

Collaboration tools are destined to fail when IT leaders look to solve problems that don’t exist. Here’s how CIOs and IT managers ensure their collaborative platform efforts aren’t futile.

Driving enterprise collaboration is a tall order for CIOs and other IT leaders. The challenges don’t end after a new tool is implemented. If not done the right way for the right reasons, the headaches of deploying a collaboration platform can fester well beyond the technical hurdles.

The first thing to remember is that collaboration tool adoption in the enterprise is a journey, John Abel, senior vice president of IT at Hitachi Data Systems, told CIO.com in an email interview.

“It has to be appealing, or provide value or information that employees find more difficult to access on other platforms,” Abel says.

Collaboration projects are almost destined to get bogged down when IT leaders pursue solutions to problems that don’t exist. So how can CIOs ensure success?

Empower employees and respect their needs

IT leaders should get insights into the tools employees already use and make sure they are personally invested in the selection process, Brian Lozada, director and CISO at the job placement firm Abacus Group, told CIO.com.

When employees are empowered, they are more likely to use and generate excitement for new collaboration tools internally, Lozada says. Employees ultimately contribute to and determine the success of most collaboration efforts.

It’s also important to acknowledge what success in enterprise collaboration looks like. This is particularly important when employees use collaboration tools to get work done more effectively, says NetScout CIO and Senior Vice President of Services Ken Boyd. “Freedom and flexibility are paramount to how most users want to work today.”

The less training required, the better: tools that are more intuitive tend to deliver greater benefits for the organization and the user.

“Faster employee engagement of a collaboration tool comes by addressing a pain point in a communication or productivity area, and showing how the tool, with a simple click, provides better or instant access to colleagues and information, shaves seconds or minutes off schedules, or provides greater visibility into a team project,” Boyd says.

Presenting the business benefit of faster and more widespread adoption of collaboration tools can be a strong motivator for many department heads as well, Boyd says.

User experience is a critical component of any tool and its chances for success, according to Shamlan Siddiqi, vice president of architecture and application development at the systems integrator NTT Data. “Users want something they can quickly deploy with the same immersive and collaborative experience that they get when using collaboration tools at home,” he says.

Gamification is a leading trigger for adoption

“Employee engagement techniques such as gamification and game-design principles help create incentives for users to engage and embrace tools more effectively,” says Siddiqi, adding that NTT Data has seen significant increases in collaborative tool engagement internally through the introduction of gamification.

Chris McKewon, founder and CEO of the managed services provider Xceptional Networks, agrees that gamification is the best way to encourage employees to use new tools.

“Gamification provides incentives for embracing the technology and demonstrates how much more real work they can get done with these tools, by selling the concepts on benefits, not on features,” McKewon told CIO.com in an email interview.
Collaboration and the art of seduction

Ruven Gotz, director of collaboration services at the IT solutions vendor Avanade, says his team drives adoption by seduction.

“Our goal is to create collaboration experiences that users clearly recognize as the superior means to achieve the results they seek,” Gotz says.

When CIOs and IT leaders get enterprise collaboration right, there’s no need to drive adoption, Gotz says, because “employees recognize that we have provided a better working flow and will abandon other alternatives.”



QUESTION 1
You are using SQL Server Management Studio (SSMS) to configure the backup for ABC
Solutions. You need to meet the technical requirements.
Which two backup options should you configure? (Choose two).

A. Enable encryption of the backup file.
B. Enable compression of the backup file.
C. Disable encryption of the backup file.
D. Disable compression of the backup file.

Answer: B,C

Explanation:
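The scenario’s technical requirements are not reproduced here, but the answer implies a compressed, unencrypted backup. For reference, a sketch of the equivalent T-SQL, driven from Python via pyodbc; the server, database and path names are invented for the example:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ABC-SQL01;Trusted_Connection=yes",
    autocommit=True,  # BACKUP DATABASE cannot run inside a transaction
)
conn.execute(
    "BACKUP DATABASE AbcSolutions "
    "TO DISK = N'D:\\Backups\\AbcSolutions.bak' "
    "WITH COMPRESSION, INIT"  # compression enabled; no encryption clause
)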


QUESTION 2
You need to convert the Production, Sales, Customers and Human Resources databases to
tabular BI Semantic Models (BISMs).
Which two of the following actions should you perform? (Choose two)

A. You should select the tabular mode option when upgrading the databases using the Database
Synchronization Wizard.
B. You should select the tabular mode destination option when copying the databases using SQL
Server Integration Services (SSIS).
C. You should select the tabular mode option during the installation of SQL Server Analysis
Services.
D. You should redevelop the projects and deploy them using SQL Server Data Tools (SSDT).

Answer: A,D

Explanation:


QUESTION 3
ABC users report that they are not receiving report subscriptions from SQLReporting01.
You confirm that the report subscriptions are not being delivered.
Which of the following actions should you perform to resolve the issue?

A. You should run the SQL Server 2012 Setup executable on SQLReporting01 to generate a
configuration file.
B. You should reset the password of the SQL Server Service account.
C. You should manually fail over the SSAS cluster.
D. You should restore the ReportServer database on SQLReporting01.

Answer: C

Explanation:


QUESTION 4
ABC users report that they are not receiving report subscriptions from SQLReporting01.
You confirm that the report subscriptions are not being delivered.
Which of the following actions should you perform to resolve the issue?

A. You should run the SQL Server 2012 Upgrade Wizard to upgrade the active node of the
SSAS cluster.
B. You should start the SQL Server Agent on the active node of the SSAS cluster.
C. You should restore the ReportServerTempDB database on SQLReporting01.
D. You should start the SQL Server Agent on SQLReporting01.

Answer: D

Explanation:


QUESTION 5
You need to make the SSAS databases available on SSAS2012 to enable testing from client
applications. Your solution must minimize server downtime and maximize database
availability.
What should you do?

A. You should detach the databases from the SSAS cluster by using SQL Server Management
Studio (SSMS) then attach the databases on SSAS2012.
B. You should copy the database files from the SSAS cluster to SSAS2012.
C. You should export the databases from the SSAS cluster by using SQL Server Management
Studio (SSMS) then import the databases on SSAS2012.
D. You should restore a copy of the databases from the most recent backup.

Answer: D

Explanation:



Open-source software projects are often well intended, but security can take a back seat to making the code work.

OpenDaylight, the multivendor software-defined networking (SDN) project, learned that the hard way last August after a critical vulnerability was found in its platform.

It took until December for the flaw, called Netdump, to get patched, a gap in time exacerbated by the fact that the project didn’t yet have a dedicated security team. After he tried and failed to get in touch with OpenDaylight, the finder of the vulnerability, Gregory Pickett, posted it on Bugtraq, a popular mailing list for security flaws.


Although OpenDaylight is still in the early stages and generally isn’t used in production environments, the situation highlighted the need to put a security response process in place.

“It’s actually a surprisingly common problem with open-source projects,” said David Jorm, a product security engineer with IIX who formed OpenDaylight’s security response team. “If there are not people with a strong security background, it’s very common that they won’t think about providing a mechanism for reporting vulnerabilities.”

The OpenDaylight project was launched in April 2013 and is supported by vendors including Cisco Systems, IBM, Microsoft, Ericsson and VMware. The aim is to develop networking products that remove some of the manual fiddling that administrators still need to do with controllers and switches.

Having a common foundation for those products would help with compatibility, as enterprises often use a variety of networking equipment from many vendors.

Security will be an integral component of SDN, since a flaw could have devastating consequences. By compromising an SDN controller—a critical component that tells switches how data packets should be forwarded—an attacker would have control over the entire network, Jorm said.

“It’s a really high value target to go after,” Jorm said.
The Netdump flaw kicked OpenDaylight into action, and now there is a security team in place from a range of vendors who represent different projects within OpenDaylight, Jorm said.

OpenDaylight’s technical steering committee also recently approved a detailed security response process modeled on one used by the OpenStack Foundation, Jorm said.

If a vulnerability is reported privately and not publicly disclosed, some OpenDaylight stakeholders—even those who do not have a member on the security team—will get pre-notification so they have a chance to develop a patch, Jorm said. That kind of disclosure is rare, though it is becoming more common with open-source projects.

The idea is that once a flaw is disclosed, vendors will generally be on the same page and release a patch around the same time, Jorm said.

OpenDaylight’s security response process is “quite well ironed out now,” Jorm said.



It’s not often that a great product becomes even greater …

The Raspberry Pi 2 Model B, available from Element 14, was recently released, and it’s a serious step up from its predecessors. Before we dive into what makes it an outstanding product, here is the Raspberry Pi family tree, from oldest to newest:

Raspberry Pi B
Raspberry Pi A
Raspberry Pi B+
Raspberry Pi A+
Raspberry Pi 2 Model B

The + models were upgrades of the previous board versions, and the RPi2B is the Raspberry Pi B+’s direct descendant with added muscle. So, what makes the Raspberry Pi 2 Model B great?

The Raspberry Pi 2 Model B has a 40-pin GPIO header, as did the A+ and B+, and the first 26 pins are identical to the A and B models, making the new board a drop-in upgrade for most projects (see the GPIO sketch below). The new board also supports all of the expansion (HAT) boards used by the previous models.
The Raspberry Pi 2 Model B has an identical board layout and footprint to the B+, so all cases and 3rd-party add-on boards designed for the B+ will be fully compatible.
In common with the B+, the Raspberry Pi 2 Model B has 4 USB 2.0 ports (compared to 2 USB ports on the A, A+, and B models) that can provide up to 1.2 Amps for the more power-hungry USB devices (this feature does, however, require a 2 Amp power supply).
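That pin compatibility means existing GPIO code carries straight over. As a small illustration, here is a classic LED-blink example using the RPi.GPIO Python library; BCM pin 18 is just an example choice, and it sits within the original 26-pin block.

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # Broadcom numbering, the same on every Pi model
GPIO.setup(18, GPIO.OUT)     # BCM 18 = physical pin 12, present since the original B

try:
    for _ in range(10):      # blink an LED wired to pin 18 ten times
        GPIO.output(18, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()           # release the pins however the loop exits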

The Raspberry Pi 2 Model B video output is via a full-sized HDMI (rev 1.3 & 1.4) port with 14 HDMI resolutions from 640×350 to 1920×1200 with digital audio (there’s also composite video output; see below).

The A, A+, and B models use linear power regulators while the B+ and the Raspberry Pi 2 Model B have switching regulators which reduce power consumption by between 0.5W and 1W.
In common with the B+, the Raspberry Pi 2 Model B’s audio circuit has a dedicated low-noise power supply for better audio quality and analog stereo audio is output on the four pole 3.5mm jack it shares with composite video (PAL and NTSC) output.

The previous top of the line B+ model had 512MB of RAM while the new Raspberry Pi 2 Model B now has 1GB making it possible to run larger applications and more complex operating system environments.

The previous Raspberry Pi models used a 700 MHz single-core ARM1176JZF-S processor while the Raspberry Pi 2 Model B has upped the ante to a 900 MHz quad-core ARM Cortex-A7, a considerably faster CPU. The result is performance that’s roughly 6 times better! The advantages of upgrading existing projects to the Raspberry Pi 2 Model B are huge.

Not only will the Raspberry Pi 2 Model B run all of the operating systems its predecessors ran, it will also be able to run Microsoft’s Windows 10 … for free! Yep, Microsoft has decided that it wants to be part of the Raspberry Pi world and for a good reason; a huge number of kids will have their first experience of computing on RPi boards and what better way to gain new acolytes?

This may be the best improvement of the lot: For the added compute power, increased RAM, and drop-in compatibility there’s no extra cost! The Raspberry Pi 2 Model B is priced at $35, the same as its predecessor!

The Raspberry Pi 2 Model B is one of the best (quite possibly, *the* best) single board computers available and, given the huge popularity of the Raspberry Pi family (now with more than 500,000 Raspberry Pi 2 Model B’s sold and around 5 million Pi’s in total if you include all models), it’s one of the best understood and supported products of its kind. Whether it’s for hobbyist, educational, or commercial use, the Raspberry Pi 2 Model B is an outstanding product.


 

 


AIIM group finds Microsoft’s Yammer social tool slow to catch on as well, though IT shops hopeful about product roadmap

Many SharePoint installations at enterprises have been doomed largely due to senior management failing to really get behind the Microsoft collaboration technology, according to a new study by AIIM, which bills itself as “the Global Community of IT Professionals.”

The AIIM (Association for Information and Image Management) Web-based survey of 409 member organizations found that nearly two-thirds described their SharePoint projects as either stalled (26%) or not meeting original expectations (37%).

The associated Yammer social business tool has also been slow to catch on, with only about 1 in 5 organizations using it, and only 10% of them using it regularly and on a widespread basis (Disclosure: I use it a bit here and there at IDG Enterprise!). Many organizations aren’t specifically biased against Yammer though — 4 in 10 say they don’t use any such tool.

Reasons cited for tepid uptake of SharePoint and Yammer include inadequate user training and investment.

“Enterprises have it, but workers are simply not engaging with SharePoint in a committed way,” said Doug Miles, AIIM director of market intelligence, in a statement. “It remains an investment priority however, and the C-suite must get behind it more fully than they are currently if they are to realize a return on that investment.”

Miles says it shouldn’t be up to IT departments to push SharePoint within organizations, but rather, business lines should take the lead.

The study showed that 75% of respondents still feel strongly about making SharePoint work at their organizations. The cloud-based Office 365 version has shown good signs of life, and 43% of respondents indicated faith in Microsoft’s product roadmap for its collaboration tools, according to the AIIM report.

Half of respondents expressed concern about a lack of focus by Microsoft on the on-premise version of SharePoint. That’s an issue that market watcher Gartner stressed last year could make SharePoint a lot less useful for organizations counting on it for customer-facing and content marketing applications.

You can get a free full version of the AIIM study, ‘Connecting and Optimizing SharePoint’, by filling out a registration form.

The research was underwritten in part by ASG, AvePoint, Colligo, Concept Searching, Collabware, EMC, Gimmal Group, K2 and OpenText. While Microsoft is a member of AIIM’s Executive Leadership Council, it is not listed as one of the funders for this study.

A Microsoft representative is looking into our request for comment on the report.


The best office apps for Android

Written by admin
January 19th, 2015

Which office package provides the best productivity experience on Android? We put the leading contenders to the test

Getting serious about mobile productivity
We live in an increasingly mobile world — and while many of us spend our days working on traditional desktops or laptops, we also frequently find ourselves on the road and relying on tablets or smartphones to stay connected and get work done.

Where do you turn when it’s time for serious productivity on an Android device? The Google Play Store boasts several popular office suite options; at a glance, they all look fairly comparable. But don’t be fooled: All Android office apps are not created equal.

I spent some time testing the five most noteworthy Android office suites to see where they shine and where they fall short. I looked at how each app handles word processing, spreadsheet editing, and presentation editing — both in terms of the features each app offers and in terms of its user interface and overall experience. I evaluated performance on both tablets and smartphones.

Click through for a detailed analysis; by the time you’re done, you’ll have a crystal-clear idea of which Android office suite is right for you.

Best Android word processor: OfficeSuite 8 Premium
Mobile Systems’ OfficeSuite 8 Premium offers desktop-class word processing that no competitor comes close to matching. The UI is clean, easy to use, and intelligently designed to expand to a tablet-optimized setup. Its robust set of editing tools is organized into easily accessible on-screen tabs on a tablet (and condensed into drop-down menus on a phone). OfficeSuite 8 Premium provides practically everything you need, from basic formatting to advanced table creation and manipulation utilities. You can insert images, shapes, and freehand drawings; add and view comments; track, accept, and reject changes; spell-check; and calculate word counts. There’s even a native PDF markup utility, PDF export, and the ability to print to a cloud-connected printer.

OfficeSuite 8 Premium works with locally stored Word-formatted files and connects directly to cloud accounts, enabling you to view and edit documents without having to download or manually sync your work.

Purchasing OfficeSuite 8 Premium is another matter. Search the Play Store, and you’ll find three offerings from Mobile Systems: a free app, OfficeSuite 8 + PDF Converter; a $14.99 app, OfficeSuite 8 Pro + PDF; and another free app, OfficeSuite 8 Pro (Trial). The company also offers a dizzying array of add-ons that range in price from free to $20.

The version reviewed here — and the one most business users will want — is accessible only by downloading the free OfficeSuite 8 + PDF Converter app and following the link on the app’s main screen to upgrade to Premium. The upgrade is a one-time $19.99 in-app purchase that unlocks every option, giving you the most fully featured setup with no further purchases required.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android word processor: Google Docs
Google’s mobile editing suite has come a long way, thanks largely to its integration of Quickoffice, which Google acquired in 2012. With the help of Quickoffice technology, the Google Docs word processor has matured into a usable tool for folks with basic editing needs.

Docs is nowhere near as robust as OfficeSuite 8 Premium, but if you rely mainly on Google’s cloud storage or want to do simple on-the-go writing or editing, it’s light, free, and decent enough to get the job done, whether you’re targeting locally stored files saved in standard Word formats or files stored within Docs in Google’s proprietary format.

Docs’ clean, minimalist interface follows Google’s Material Design motif, making it pleasant to use. It offers basic formatting (fonts, lists, alignment) and tools for inserting and manipulating images and tables. The app’s spell-check function is limited to identifying misspelled words by underlining them within the text; there’s no way to perform a manual search or to receive proper spelling suggestions.

Google Docs’ greatest strength is its cross-device synchronization and collaboration potential: With cloud-based documents, the app syncs changes instantly and automatically as you work. You can work on a document simultaneously from your phone, tablet, or computer, and edits and additions show up immediately on all devices. You can also invite other users into the real-time editing process and keep in contact with them via in-document commenting.

App: Google Docs
Price: Free
Developer: Google

The rest of the Android word processors
Infraware’s Polaris Office is a decent word processor held back by pesky UI quirks and an off-putting sales approach. The app was clearly created for smartphones; as a result, it delivers a subpar tablet experience, with basic commands tucked away and features like table creation stuffed into short windows that require awkward scrolling to see all the content. Polaris also requires you to create an account before using the app and pushes a $40-a-year membership that adds a few extras and access to the company’s superfluous cloud storage service.

Kingsoft’s free WPS Mobile Office (formerly Kingsoft Office) has a decent UI but is slow to open files and makes it difficult to find documents stored on your device. I also found it somewhat buggy and inconsistent: When attempting to edit existing Word (.docx) documents, for instance, I often couldn’t get the virtual keyboard to load, rendering the app useless. (I experienced this on multiple devices, so it wasn’t specific to any one phone or tablet.)

DataViz’s Docs to Go (formerly Documents to Go) has a dated, inefficient UI, with basic commands buried behind layers of pop-up menus and a design reminiscent of Android’s 2010 Gingerbread era. While it offers a reasonable set of features, it lacks basics like image insertion and spell check, and locally stored documents are difficult to find and open. It also requires a $14.99 Premium Key to remove ads peppered throughout the program and to gain access to any cloud storage capabilities.

Best Android spreadsheet editor: OfficeSuite 8 Premium
With its outstanding user interface and comprehensive range of features, OfficeSuite 8 Premium stands out above the rest in the realm of spreadsheets. Like its word processor, the app’s spreadsheet editor is clean, easy to use, and fully adaptive to the tablet form.

It’s fully featured, too, with all the mathematical functions you’d expect organized into intuitive categories and easily accessible via a prominent dedicated on-screen button. Other commands are broken down into standard top-of-screen tabs on a tablet or are condensed into a drop-down menu on a smartphone.

From advanced formatting options to multiple-sheet support, wireless printing, and PDF exporting, there’s little lacking in this well-rounded setup. And as mentioned above, OfficeSuite connects to a long list of cloud storage services that you can use to keep your work synced across multiple devices.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android spreadsheet editor: Polaris Office
Polaris Office still suffers from a subpar, non-tablet-optimized UI, but after OfficeSuite Premium 8, it’s the next best option.

Design aside, the Polaris Office spreadsheet editor offers a commendable set of features, including support for multiple sheets and easy access to a full array of mathematical functions. The touch targets are bewilderingly small, which is frustrating on a device controlled by fingers, but most options you’d want are there, even if they’re not ideally presented or easily accessible.

Be warned that the editor has a quirk: You sometimes have to switch from “view” mode to “edit” mode before you can make changes to a sheet — not entirely apparent when you first open a file. Be ready to be annoyed by the required account creation and subsequent attempts to get you to sign up for an unnecessary paid annual subscription.

Quite honestly, the free version of OfficeSuite would be a preferable alternative for most users; despite its feature limitations compared to the app’s Premium configuration, it still provides a better overall experience than Polaris or any of its competitors. If that doesn’t fit the bill for you, Polaris Office is a distant second that might do the trick.

App: Polaris Office
Price: Free (with optional annual subscription)
Developer: Infraware

The rest of the Android spreadsheet editors
Google Sheets (part of the Google Docs package) lacks too many features to be usable for anything beyond the most basic viewing or tweaking of a simple spreadsheet. The app has a Function command for standard calculations, but it’s hidden and appears in the lower-right corner of the screen inconsistently, rendering it useless most of the time. You can’t sort cells or insert images, and its editing interface adapts poorly to tablets. Its only saving grace is integrated cloud syncing and multiuser/multidevice collaboration.

WPS Mobile Office is similarly mediocre: It’s slow to open files, and its Function command — a vital component of spreadsheet work — is hidden in the middle of an “Insert” menu. On the plus side, it has an impressive range of features and doesn’t seem to suffer from the keyboard bug present in its word-processing counterpart.

Docs to Go is barely in the race. Its embarrassingly dated UI makes no attempt to take advantage of the tablet form. Every command is buried behind multiple layers of pop-up menus, all of which are accessible only via an awkward hamburger icon at the top-right of the screen. The app’s Function command doesn’t even offer descriptions of what the options do — only Excel-style lingo like “ABS,” “ACOS,” and “COUNTIF.” During my testing, the app failed to open some perfectly valid Excel (.xlsx) files I used across all the programs as samples.

Best Android presentation editor: OfficeSuite 8 Premium
OfficeSuite 8 Premium’s intuitive, tablet-optimized UI makes it easy to edit and create presentations on the go. Yet again, it’s the best-in-class contender by a long shot. (Are you starting to sense a pattern here?)

OfficeSuite offers loads of options for making slides look professional, including a variety of templates and a huge selection of slick transitions. It has tools for inserting images, text boxes, shapes, and freehand drawings into your slides, and it supports presenter notes and offers utilities for quickly duplicating or reordering slides. You can export to PDF and print to a cloud-connected printer easily.

If you’re serious about mobile presentation editing, OfficeSuite 8 Premium is the only app you should even consider.

App: OfficeSuite 8 Premium
Price: $19.99 (via in-app upgrade)
Developer: Mobile Systems

Runner-up Android presentation editor: Polaris Office
If it weren’t for the existence of OfficeSuite, Polaris’s presentation editor would look pretty good. The app offers basic templates to get your slides started; they’re far less polished and professional-looking than OfficeSuite’s, but they get the job done.

Refreshingly, the app makes an effort to take advantage of the tablet form in this domain, providing a split view with a rundown of your slides on the left and the current slide in a large panel alongside it. (On a phone, that rundown panel moves to the bottom of the screen and becomes collapsible.)

With Polaris, you can insert images, shapes, tables, charts, symbols, and text boxes into slides, and drag and drop to reorder any slides you’ve created. It offers no way to duplicate an existing slide, however, nor does it sport any transitions to give your presentation pizazz. It also lacks presenter notes.

Most people would get a better overall experience from even the free version of OfficeSuite, but if you want a second option, Polaris is the one.

App: Polaris Office
Price: Free (with optional annual subscription)
Developer: Infraware

The rest of the Android presentation editors
Google Slides (part of the Google Docs package) is bare-bones: You can do basic text editing and formatting, and that’s about it. The app does offer predefined arrangements for text box placement — and includes the ability to view and edit presenter notes — but with no ability to insert images or slide backgrounds and no templates or transitions, it’s impossible to create a presentation that looks like it came from this decade.

WPS Mobile Office is similarly basic, though with a few extra flourishes: The app allows you to insert images, shapes, tables, and charts in addition to plain ol’ text. Like Google Slides, it lacks templates, transitions, and any other advanced tools and isn’t going to create anything that looks polished or professional.

Last but not least, Docs to Go — as you’re probably expecting by this point — borders on unusable. The app’s UI is dated and clunky, and the editor offers practically no tools for modern presentation creation. You can’t insert images or transitions; even basic formatting tools are sparse. Don’t waste your time looking at this app.

Putting it all together
The results are clear: OfficeSuite 8 Premium is by far the best overall office suite on Android today. From its excellent UI to its commendable feature set, the app is in a league of its own. At $19.99, the full version isn’t cheap, but you get what you pay for, which is the best mobile office experience with next to no compromises. The less fully featured OfficeSuite 8 Pro ($9.99) is a worthy one-step-down alternative, as is the basic, ad-supported free version of the main OfficeSuite app.

If basic on-the-go word processing is all you require — and you work primarily with Google services — Google’s free Google Docs may be good enough. The spreadsheet and presentation editors are far less functional, but depending on your needs, they might suffice.

Polaris Office is adequate but unremarkable. The basic program is free, so if you want more functionality than Google’s suite but don’t want to pay for OfficeSuite — or use OfficeSuite’s lower-priced or free offerings — it could be worth considering. But you’ll get a significantly less powerful program and less pleasant overall user experience than what OfficeSuite provides.

WPS Mobile Office is a small but significant step behind, while Docs to Go is far too flawed to be taken seriously as a viable option.

With that, you’re officially armed with all the necessary knowledge to make your decision. Grab the mobile office suite that best suits your needs — and be productive wherever you may go.



Coming soon: Better geolocation Web data

Written by admin
January 8th, 2015

The W3C and OGC pledge to ease the path for developing location-enriched Web data

From ordering pizza online to pinpointing the exact location of a breaking news story, an overwhelming portion of data on the Web has geographic elements. Yet for Web developers, wrangling the most value from geospatial information remains an arduous task.

Now the standards body for the Web has partnered with the standards body for geographic information systems (GIS) to help make better use of the Web for sharing geospatial data.

Both the World Wide Web Consortium (W3C) and the Open Geospatial Consortium (OGC) have launched working groups devoted to the task. They are pledging to closely coordinate their activities and publish joint recommendations.

Adding geographic elements to data online in a meaningful way “can be done now, but it is difficult to link the two worlds together and to use the infrastructure of the Web effectively alongside the infrastructure of geospatial systems,” said Phil Archer, who is acting as data activity lead for the W3C working group.

A lack of standards is not the problem. “The problem is that there are too many,” he said. With this in mind, the two standards groups are developing a set of recommendations for how to best use existing standards together.

As much as 80 percent of data has some geospatial element to it, IT research firm Gartner has estimated. In the U.S. alone, geospatial services generate approximately $75 billion in annual revenue, according to the Boston Consulting Group.

Making use of geospatial data can still be a complex task for the programmer, however. An untold amount of developer time is frittered away trying to understand multiple formats and sussing out the best ways to bridge them.

For GIS software, the fundamental units of geospatial surface measurement are the point, line and polygon. Yet people who want to use geographically enhanced data tend to think about locations in a fuzzier manner.

For instance, say someone wants to find a restaurant in the “Little Italy” section of a city, Archer explained. Because such neighborhoods are informally defined, they don’t have a specific grid of coordinates that could help in generating a definitive set of restaurants in that area.

“That sort of information is hard to get if you don’t have geospatial information and it is also hard to get if you only have geospatial information,” Archer said.
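
To make the point-line-polygon contrast concrete, here is a minimal, illustrative sketch in Go (the struct, field names and coordinates are invented for this example, not taken from any W3C or OGC document) of how a precise GIS-style point can be expressed in GeoJSON, a widely used interchange format:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // Geometry is a minimal GeoJSON-style shape. Points, lines and
    // polygons are the fundamental units GIS software works with.
    type Geometry struct {
        Type        string      `json:"type"`
        Coordinates interface{} `json:"coordinates"`
    }

    func main() {
        // A restaurant pinned to exact coordinates (longitude, latitude).
        restaurant := Geometry{
            Type:        "Point",
            Coordinates: []float64{-73.9973, 40.7193},
        }

        out, err := json.Marshal(restaurant)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
        // Prints: {"type":"Point","coordinates":[-73.9973,40.7193]}

        // An informal neighborhood like "Little Italy" has no canonical
        // polygon to put in Coordinates, which is exactly the gap between
        // Web data and GIS data that the working groups describe.
    }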

Much of the groups’ work will center on finding better ways to bridge geolocational and non-geolocational data, a need the two organizations agreed on at a joint meeting in London last March.

The groups will build on previous research done in the realm of linked open data, an approach of formatting disparate sources of data so they can be easily interlinked.

The groups will also look at ways to better harness emerging standards, notably the W3C’s Semantic Sensor Network ontology and OGC’s GeoSPARQL.

The working groups plan to define their requirements within the next few months and will issue best-practices documents as early as the end of the year.



16 of the hottest IT skills for 2015

Written by admin
January 4th, 2015

2015 will bring new opportunities for professional growth and development, not to mention more money. But what specific skills will add the most value to your career advancement in the new year?

The Hottest IT and Tech Skills for 2015
What skills should IT professionals add to their toolbox to increase their compensation in 2015? To find out, CIO.com worked with David Foote, chief analyst and research officer at Foote Partners, to comb through the firm’s quarterly data and uncover which skills will lead to higher pay in the short term and help IT pros position themselves for their next career move in the long term.

Foote Partners uses a proprietary methodology to track and validate compensation data for tech workers. It collects data on 734 individual certified and noncertified IT skills; of those, 384 are noncertified and are the focus of this report.

Cloud Skills
Cloud adoption continues to accelerate as organizations large and small try to capitalize on cloud computing’s cost benefits. In fact, the cloud has become mainstream in IT organizations, with adoption among IT departments somewhere near 90 percent for 2014. “Companies began discovering the cloud about four years ago and it’s been quite volatile in the last year. Will companies continue to invest in the cloud? The answer is ‘yes,’” according to Foote.

Although Foote Partners has found a 3 percent to 3.5 percent drop in the market value of cloud skills, Foote notes the area’s unpredictability is cyclical. “It’s a volatile marketplace when it comes to talent,” he says.

Architecture
Foote points out that as organizational complexity increases, businesses are becoming more aware of the value of a great architect, and these roles are showing up with more frequency among his clients. The Open Group Architecture Framework (TOGAF) skills, in particular, are the most highly paid of the noncertified IT skills and a regular on the hot-skills lists.

“We know a lot of companies are getting into architecture in a bigger way. They’re hiring more architects; they’re restructuring their enterprise architect departments. They’re starting to see a lot of value, and no one is really debating that you can never have too many talented architects in your business. This is not something you can ignore. Everyone is thinking that no matter what we do today, we have to always be thinking down the road — three years, five years or more. The people that do that for a living are architects,” says Foote.

Database/Big Data Skills
Big data is attractive to organizations for a number of reasons. Unfortunately, many of those reasons haven’t panned out. According to Foote, companies got caught up in the buzz and now they are taking a more conservative approach. That said, this is an area that Foote Partners expects to grow in 2015. Adding any of these skills to your skillset will make you more valuable to any employer looking to capitalize on the promise of big data.

Although it just missed the firm’s highest-paying noncertified IT skills list, pay for data science skills is expected to increase into 2015. “This group [of skills] is in transition. There is still a big buzz factor around data sciences which will result in companies paying more for this skill,” says Foote.

Data management will increasingly be important as companies try to wrangle actionable data from their many disparate sources of data.

Applications Development Skills
Applications development is undoubtedly a hot skills area. Demand for both mobile and desktop developers continues to increase, and this trend will continue well into 2015. However, Foote Partners data suggests that the three skills listed here are poised for significant growth in the coming year. It’s worth noting that JavaFX and user interface/experience design skills also made Foote Partners’ list of highest-paying noncertified IT skills.

Organizations are more regularly refining their digital customer experience, making user interface and experience design crucial skills in the coming year.

JavaFX is coming on strong as it replaces Swing in the marketplace.

Agile programming is new to the noncertified IT skills list, but Foote predicts the pay premium for this area will grow into 2015.

SAP and Enterprise Business Applications Skills
SAP is a global provider of enterprise business applications, with ERP modules ranging from business operations to CRM. Foote Partners tracks some 93 SAP modules and has noticed a lot of fluctuation in their value over the last year. However, according to Foote Partners data, SAP CO-PA, SAP FI-FSCM, SAP GTS and SAP SEM are all expected to be hot in 2015.

Security Skills
Security came to the forefront in 2014, with organizations large and small being targeted by cybercriminals. The list of businesses attacked is long and includes heavyweights like Sony, eBay and Target. Foote points out that cybersecurity is now part of the lexicon for techies and consumers alike.

“Security is blown wide open. Cybersecurity has now become an issue that everyone sees as important. Inside cybersecurity skills and certifications there is a lot of activity. It’s gone mainstream. I think you’re going to see cybersecurity on this list for some time to come,” says Foote.

Management, Process and Methodology Skills
Project and program management are new to the list, but Foote Partners predicts this area will be in high demand in 2015.

Foote emphasizes that fluctuations in pay premiums don’t tell the whole story. The firm also applies what it has learned from the 2,648 employers it works with. That’s why some of the skills covered here appear flat: they make the hot list because Foote Partners has uncovered data or trends that will likely drive up pay in these areas in 2015.

“There is more than recent pay premium track record considered in our forecast list. We talk to a lot of people in the field making decisions about skills acquisition at their companies. We look at tech evolution and where we think skills consumption is heading and so forth,” says Foote.



The greatest tech wins and epic comebacks of 2014

Written by admin
January 1st, 2015

From gigantic smartphones to virtual reality, here are the products, companies and ideas that emerged victorious in the tech world this year.

Refinement, not revolution
While 2014 didn’t bring much in the way of revolutionary technology, it was a great year for refinement. The products and services we’ve relied on for years became cheaper and more accessible, while once-difficult concepts like virtual reality and mobile wallets started to look a little more practical. And if you look hard enough, you can even find some examples where the government didn’t screw everything up.

Here are the top 10 products, companies and ideas that emerged victorious in the tech world this year.

Microsoft’s new moves
Whether you loved or loathed Steve Ballmer, you’ve got to admit Microsoft has become a more exciting company since his departure. Under new CEO Satya Nadella, Microsoft has slain the sacred cows of Windows and Office, offering free versions of both on tablets and other mobile devices. We’ve seen Microsoft show a deep appreciation for other platforms as well, with new apps and integrations on Android and iPhone. The message? If you haven’t been paying attention to Microsoft lately, you might want to reconsider.

Apple Pay makes the mobile wallet work
Mobile payments had plenty of naysayers before the arrival of Apple Pay, as they wondered how paying at the checkout line with a smartphone could ever be easier than pulling out a credit card. Apple’s answer is simple: Pair the iPhone’s TouchID fingerprint reader with NFC, so users can pay without even looking at their phones or turning on the screen. Not only is that more efficient than a credit card, it’s way more secure because it never transmits the actual card number. Older solutions never quite got it right, and that’s why Apple Pay quickly became the mobile payments frontrunner.

PlayStation 4 asserts its dominance
While Microsoft hemmed and hawed over its Xbox strategy, Sony realized early on that it could take control of the console wars with lower pricing and a focus on gaming. That plan paid off this year, as the PlayStation 4 outsold the Xbox One in the United States for 10 months in a row. True, Microsoft had a strong November thanks to significant price drops, but chances are those cuts wouldn’t have happened if Sony hadn’t built up a commanding lead.

Validation for gigantic phones
Samsung was onto something when it launched the Galaxy Note in 2011, even if pundits failed to recognize it at the time. Three years later, even regular-sized phones from Samsung and LG have screens exceeding five inches, and Apple finally saw fit to super-size its iPhone lineup with 4.7-inch and 5.5-inch models. While there’s an argument to be made for smaller screens, the jumbo phone is here to stay.

Net neutrality protesters win this round
FCC Chairman (and former telecom lobbyist) Tom Wheeler probably expected some pushback when he proposed some alarmingly flaccid net neutrality rules earlier this year, but the actual response was overwhelming. The FCC received a record 3 million comments—most of them opposed to Wheeler’s proposal—and last month, President Barack Obama urged Wheeler to create stronger protections by reclassifying broadband as a phone-like utility. Even if the FCC makes a decision in the spring, as many expect, lawsuits could prolong the conflict for years. At least the public can feel good about making their voices heard.

Oculus takes Facebook’s money to make VR huge
Until March of this year, Oculus was chugging along as a grassroots effort, with big ambitions for virtual reality but not enough capital to see them through. That was before Facebook splashed the VR pot with a $2 billion acquisition. The move had plenty of detractors, but Facebook’s money allows Oculus to move faster, create better products, and maybe even finally bring virtual reality to the masses. If Facebook can keep its promises not to meddle too much, it might even be a way to win back some much-needed trust.

Winamp keeps on keeping on
After 15 years of kicking out the jams, Winamp seemed to be at the end of its rope last November. A notification informed users that the once-beloved MP3 player would go offline the following month, kicking off a final wave of nostalgia. But in January, Winamp got a reprieve, with a last-minute acquisition by Internet radio firm Radionomy. Winamp may never return to its glorious past, but at least it still has a future.

Cord-cutting gets real
With more people giving up their cable TV subscriptions or deciding not to have one in the first place, it’s getting harder for the pay TV industry to pretend that cord-cutting isn’t real. This year’s biggest acknowledgment of reality came courtesy of HBO, which now says it will launch a standalone streaming service in 2015. Showtime quickly followed suit. Expect this to become a trend as the expensive, bloated cable bundle reaches its tipping point.

Cloud storage gets dirt-cheap
If you’d written off cloud storage as being too expensive to contain all your precious digital belongings, 2014 has been a good year to reconsider. Microsoft kicked off the cloud storage price wars with 1TB for Office 365 subscribers, and later went fully unlimited. Google followed with reduced pricing for consumers and unlimited storage for enterprise users. And Dropbox, whose price per gigabyte had never been a bargain, upped its $10-per-month service from 100GB to 1TB. Add Amazon’s unlimited photo storage for Prime subscribers to the mix, and you’ve got plenty of cloud storage options on the cheap.

Supreme Court says no to warrantless phone search
The U.S. Supreme Court didn’t get everything right this year (see: Aereo). But at least the Justices had the sense to realize that the contents of your phone are just as personal and private as the belongings in your house. As such, law enforcement can’t search smartphones without a warrant. At a time of rapidly eroding digital privacy, the decision was a much-needed shot of sanity.



The news follows the decision to release the controversial movie in some theaters on Christmas Day

The controversial movie “The Interview” is now available online through Google and Microsoft services as well as a Sony Pictures website, the companies announced separately on Wednesday.

This development follows Sony’s decision to screen the comedy in select U.S. cities on its original release date of Christmas Day after initially canceling those plans.

A satire that depicts how two U.S. journalists would carry out an assignment to assassinate North Korean leader Kim Jong-un, “The Interview” has been anything but a barrel of laughs for Sony, which produced the film.

Sony suffered a cyberattack that resulted in theft of emails containing sensitive information like actor salaries and plots of upcoming movies. Additionally, threats of violence against theaters that showed the film led Sony to cancel its theatrical release, a decision it later reversed.

Google, which made the movie available to either rent or buy through YouTube and Google Play, weighed the security concerns before agreeing to offer the movie, said David Drummond, Google’s senior vice president of corporate development and chief legal officer, in a blog post.

“Sony and Google agreed that we could not sit on the sidelines and allow a handful of people to determine the limits of free speech in another country (however silly the content might be),” he said, adding that Sony approached the company last week about making the film available online.

People with an Xbox game console, a Windows Phone, or a PC or tablet running Windows 8 or 8.1 can either purchase or rent the movie, Microsoft said in a blog post that also touched on themes of freedom to explain its decision to sell the film.

“Our Constitution guarantees for each person the right to decide what books to read, what movies to watch, and even what games to play. In the 21st Century, there is no more important place for that right to be exercised than on the Internet,” wrote Brad Smith, Microsoft’s general counsel and executive vice president, legal and corporate affairs.

Microsoft and Google charge US$5.99 to rent the movie and $14.99 to buy it.



Google Go ventures into Android app development

Written by admin
December 12th, 2014

Google Go 1.4 adds official support for Android, as well as improved syntax and garbage collection

Google’s Go language, which is centered on developer productivity and concurrent programming, can now be officially used for Android application development.

The capability is new in version 1.4, released this week. “The most notable new feature in this release is official support for Android. Using the support in the core and the libraries in the golang.org/x/mobile repository, it is now possible to write simple Android apps using only Go code,” said Andrew Gerrand, lead on the Google Cloud Platform developer relations team, in a blog post. “At this stage, the support libraries are still nascent and under heavy development. Early adopters should expect a bumpy ride, but we welcome the community to get involved.”

Android development has commonly leveraged Java programming on the Dalvik VM, with Dalvik replaced by ART (Android Run Time) in the recently released Android 5.0 OS. Open source Go, which features quick compilation to machine code, garbage collection, and concurrency mechanisms, expands options for Android developers. The upgrade can build binaries on ARM processors running Android, release notes state, and can build a .so library to be loaded by an Android application using supporting packages in the mobile subrepository.

“Go is about making software simpler,” said Gerrand in an email, “so naturally, application development should be simpler in Go. The Go Android APIs are designed for things like drawing on the screen, producing sounds, and handling touch events, which makes it a great solution for developing simple applications, like games.”

Android could help Go grow, said analyst Stephen O’Grady, of RedMonk: “The Android support is very interesting, as it could eventually benefit the language much the same way Java has from the growth of the mobile platform.”

Beyond the Android capabilities, version 1.4 improves garbage collection and features support for ARM processors on Native Client cross-platform technology, as well as for AMD64 on Plan 9. A fully concurrent collector will come in the next few releases.

Introduced in 2009, the language has been gaining adherents lately. Go 1.3, the predecessor to 1.4, arrived six months ago. Go, O’Grady said, “is growing at a healthy pace. It was just outside our top 20 the last time we ran our rankings [in June], and I would not be surprised to see it in the Top 20 when we run them in January.”

Version 1.4 contains “a small language change, support for more operating systems and processor architectures and improvements to the tool chain and libraries,” Gerrand said. It maintains backward compatibility with previous releases. “Most programs will run about the same speed or slightly faster in 1.4 than in 1.3; some will be slightly slower. There are many changes, making it hard to be precise about what to expect.”

The change to the language is a tweak to the syntax of for-range loops, said Gerrand. “You may now write for range s { to loop over each item from s, without having to assign the value, loop index, or map key.” The go command, meanwhile, has a new subcommand, called go generate, to automate the running of tools generating source code before compilation. The Go project with version 1.4 has been moved from Mercurial to Git for source code control.
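
For illustration, here is a minimal, self-contained example of the new loop form (the variable names are mine, not from the release notes):

    package main

    import "fmt"

    func main() {
        items := []string{"alpha", "beta", "gamma"}

        count := 0
        // New in Go 1.4: a range clause with no loop variables at all,
        // for when you only need the iteration, not the values.
        for range items {
            count++
        }
        fmt.Println(count) // prints 3
    }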



Blowing up entrenched business models and picking up the profits that spill onto the floor is a time-honored tradition in tech, these days known by the cliche of the moment, “disruption.” This year everyone was trying to push back against those upstarts, whether by buying them like Facebook did, reorganizing to compete with them like HP and Microsoft have done, or just plain going out against them guns blazing, as it seemed that every city and taxi company did with Uber. European courts fought the disruptive effect Google search has had on our very sense of the historical record. But meanwhile, legions of net neutrality supporters in the US spoke up to save the Internet’s core value of disruption against the oligopoly of a handful of communications carriers. Here are our picks for the top stories of a very, well, disruptive year.

Nadella aims Microsoft toward relevancy in a post-PC world
Taking over from Steve Ballmer in February, CEO Satya Nadella faced several uncomfortable truths, among them: Windows powers only 15 percent of all computing devices worldwide, including smartphones, tablets and PCs, meaning Microsoft is no longer at the center of most people’s computing experience. Nadella says he wants Microsoft to be the productivity and platform company for a “mobile first, cloud first world.” Under Nadella, Microsoft has launched Office for the iPad, embraced open source software for its Azure cloud and launched the beta for Windows 10, which promises to smooth out Windows 8’s confusing, hybrid user interface. Shortly after closing the Nokia acquisition he inherited, Nadella announced 18,000 job cuts, 14 percent of its global staff. The bulk of those cuts are in Nokia, which has been relegated to the “other” market share category in smartphones. Microsoft’s sales looked good last quarter, jumping 25 percent year-over-year to $23.2 billion, though profit was hurt by the Nokia buy. Nadella claimed the company is “innovating faster,” which had better be true if he is to succeed.

HP says breaking up is hard, but necessary
Agility appears to be more important than size these days. In an about-face from the direction CEO Meg Whitman set three years ago, Hewlett-Packard announced in October that it will split up, divorcing its PC and printer operations from its enterprise business. When Whitman took the reins from former HP chief Leo Apotheker in 2011, she renounced his idea to split up the venerable Silicon Valley company, saying PCs were key to long-term relationships with customers. But shedding assets is becoming a common strategy for aging tech giants. IBM has focused on enterprise technology and services after selling first its PC operations years ago, and then its server business this year, to Lenovo, and agreeing in October to pay GlobalFoundries $1.5 billion to take over money-losing chip facilities. Symantec announced this year that it would spin off its software storage business, the bulk of which it acquired 10 years ago from Veritas Software for $13.5 billion. The big question for HP is whether it can avoid alienating users and distracting its hundreds of thousands of employees.

Uber’s bumpy ride shakes up the “sharing” economy
Legal challenges and executives behaving badly marked the ascendancy of Uber this year as much as its explosive growth and sky-high valuation. The startup’s hard-driving, take-no-prisoners culture has made it an unlikely poster child for the innocuous—and perhaps misleadingly labeled—“sharing” economy. Announcing the company’s latest billion-dollar cash injection in December, CEO Travis Kalanick bragged that Uber had launched operations in 190 cities and 29 countries this year. The service is now valued at $40 billion. But the company’s army of private drivers face legal challenges, inquiries and preliminary injunctions against operating, from Germany and the UK to various US states. Executives have made matters worse by threatening to dig up dirt on critical journalists and bragging about a tool called “god view” that lets employees access rider logs without permission. Rival app-based ride services like Lyft and Sidecar, whose operations are also the target of inquiries, are distancing themselves from Uber. Added to all this, there are complaints about the legality of other sorts of so-called sharing services, like apartment-rental site Airbnb, which has spawned not just opportunities for regular folks with an extra room and a hospitable nature, but created a class of real-estate investors who are de facto hoteliers. All this suggests that Web-based companies seeking a “share” of profits using middleman tech platforms to disrupt highly regulated businesses like taxis and lodging have some real battles against entrenched interests still to fight.

Facebook gambles $16 billion on WhatsApp
Established companies are snapping up upstarts at a pace not seen since the dot-com boom days, but in February Facebook’s plan to buy WhatsApp for $16 billion had jaws dropping at the price tag. WhatsApp has hit about a half billion users with its mobile messaging alternative to old-school carriers. Facebook already had a chat feature, as well as a stand-alone mobile app called Messenger. But people don’t use them for quick back and forth conversations, as CEO Mark Zuckerberg has acknowledged. At the Mobile World Congress in Barcelona, he confessed that he could not prove in charts and figures that WhatsApp is worth the money he spent, but said that not many companies in the world have a chance at cracking the billion-user mark, and that in itself is incredibly valuable.

Mt Gox implodes, deflating Bitcoin hype
Last year, Bitcoin seemed poised to disrupt conventional currencies. But this year the high-flying cryptocurrency hit some turbulence. The largest Bitcoin exchange in the world, Tokyo-based Mt Gox, fell to earth amid tears and lawsuits after an apparent hack cost the company about 750,000 bitcoins worth about $474 million. The company said a flaw in the Bitcoin software allowed an unknown party to steal the digital currency. A few weeks later Flexcoin, a smaller site, closed after it got hacked. The closures sent tremors of fear through the fledgling Bitcoin market. The leaders of Coinbase, Kraken, Bitstamp, BTC China, Blockchain and Circle all signed a statement lambasting Mt Gox for its “failings.” But the incidents took the luster off Bitcoin. Still, New York’s proposed Bitcoin regulations may establish a legal framework, and confidence, to help exchanges grow in one of the world’s biggest financial centers. Bitcoin concepts may also spur spinoff technology. A company called Blockstream is pursuing ideas to use Bitcoin’s so-called blockchain, a distributed, public ledger, as the basis for a platform for all sorts of transactional applications.
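
For readers new to the term, the core blockchain idea is easy to sketch. The following toy Go program (purely illustrative, and nothing like Bitcoin’s real data structures) shows how chaining each entry to the hash of the previous one makes tampering detectable:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // Block is one entry in a toy ledger: a payload plus the hash of
    // the previous entry, which is what chains the records together.
    type Block struct {
        Data     string
        PrevHash string
    }

    // Hash commits to this block's data and, via PrevHash, to all history.
    func (b Block) Hash() string {
        sum := sha256.Sum256([]byte(b.PrevHash + b.Data))
        return hex.EncodeToString(sum[:])
    }

    func main() {
        genesis := Block{Data: "genesis"}
        payment := Block{Data: "tx: alice pays bob", PrevHash: genesis.Hash()}

        fmt.Println("ledger tip:", payment.Hash())
        // Altering the genesis data changes its hash, invalidating
        // payment.PrevHash, so tampering anywhere breaks the chain.
    }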

Apple Pay starts to remake mobile payments
Apple’s ascendance to the world’s most valuable company came on top of market-defining products like the iPod, iTunes, the iPhone and the iPad. This year, it was not the iPhone 6 or the as-yet unreleased Apple Watch that came close to redefining a product category—it was Apple Pay. Apple Pay requires an NFC-enabled Apple device, which means an iPhone 6 or 6 Plus, but by early next year, Apple Watch as well. Businesses need NFC-equipped payment terminals. With Apple Pay, you can make a credit or debit card payment simply by tapping your iPhone to the NFC chip reader embedded in a payment terminal. As you tap, you put your finger on the iPhone 6’s biometric fingerprint reader. Apple was careful to line up partners: while Google stumbled trying to get support for its Wallet, more than 500 banks and all major credit card companies are working with Apple Pay. The potential security benefits top it off: When you enter your credit or debit card number, Apple replaces it with a unique token that it stores encrypted. Your information is never stored on your device or in the cloud.
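
Apple hasn’t published its tokenization internals, but the general pattern described above can be sketched roughly as follows (the names and logic here are invented for illustration; real payment tokenization is far more involved):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "log"
    )

    // vault maps tokens back to real card numbers. In a real payment
    // network this mapping lives with the card issuer or network,
    // never with the merchant and never on the device.
    var vault = map[string]string{}

    // tokenize returns a random stand-in for a card number.
    func tokenize(cardNumber string) string {
        b := make([]byte, 16)
        if _, err := rand.Read(b); err != nil {
            log.Fatal(err)
        }
        token := hex.EncodeToString(b)
        vault[token] = cardNumber
        return token
    }

    func main() {
        token := tokenize("4111 1111 1111 1111") // a standard test number
        fmt.Println("merchant sees only:", token)
        fmt.Println("issuer resolves it to:", vault[token])
    }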

Alibaba’s IPO marks a new era for Chinese brands
In their first day of trading on the New York Stock Exchange in September, Alibaba shares opened at $92.70, 35 percent over the $68 initial public offering price, raking in $21.8 billion and making it the biggest tech IPO ever. Alibaba is an e-commerce behemoth in China, now looking to expand globally. But don’t expect a direct challenge to Amazon right away. Its strategy for international dominance depends not only on broad e-commerce, but also on carving out different niche marketplaces. Shares three months after its opening are going for about $10 more, suggesting that shareholders have faith in that strategy. The IPO also marked the ascendancy of Chinese brands. After scooping up IBM’s PC business years ago, and this year spending $2.3 billion for IBM’s server business as well as $2.9 billion for Motorola, Lenovo is the world’s number one PC company and number three smartphone company. Meanwhile Xiaomi, the “Apple of China,” has become the world’s number-four smartphone vendor.

Regin and the continuing saga of the surveillance state
Symantec’s shocking report on the Regin malware in November opened the latest chapter in the annals of international espionage. Since at least 2008, Regin has targeted mainly GSM cellular networks to spy on governments, infrastructure operators, research institutions, corporations, and private individuals. It can steal passwords, log keystrokes and read, write, move and copy files. The sophistication of the malware suggests that, like the Stuxnet worm discovered in 2010, it was developed by one or several nation-states, quite possibly the U.S. It has spread to at least 10 countries, mainly Russia and Saudi Arabia, as well as Mexico, Ireland, India, Afghanistan, Iran, Belgium, Austria and Pakistan. If Regin really is at least six years old, it means that sophisticated surveillance tools are able to avoid detection by security products for years, a chilling thought for anyone trying to protect his data.

EU ‘right to be forgotten’ ruling challenges Google to edit history
The EU’s Court of Justice’s so-called right to be forgotten ruling in May means that Google and other search engine companies face the mountainous task of investigating and potentially deleting links to outdated or incorrect information about a person if a complaint is made. The ruling came in response to a complaint lodged by a Spanish national insisting that Google delete links to a 1998 newspaper article that contained an announcement for a real-estate auction related to the recovery of social security debts he owed. The complaint noted the issue had been resolved. But while EU data-privacy officials cheer, free-speech advocates say the ruling’s language means that people can use it to whitewash their history, deleting even factually correct stories from search results. As of mid-November, Google had reviewed about 170,000 requests to delist search results that covered over 580,000 links. The headaches are just starting: Now the EU says the delinking must be applied to all international domains, not just sites within the region.

Obama weighs in as FCC goes back to the drawing boards on net neutrality
In January, a U.S. appeals court struck down the FCC’s 2011 regulations requiring Internet providers to treat all traffic equally. The court said the FCC did not have the authority to enact the rules, challenged in a lawsuit brought by Verizon. The ruling reignited the net neutrality debate, with FCC Chairman Tom Wheeler proposing new rules in April. President Obama in November made his strongest statement on net neutrality to date, urging the FCC to reclassify broadband as a regulated utility, imposing telephone-style regulations. Obama’s move, which critics say is an unprecedented intrusion on an independent government agency, puts political pressure on Wheeler, who reportedly favors a less regulatory approach. The proposal from Wheeler earlier this year stopped short of reclassification, and allowed broadband providers to engage in “commercially reasonable” traffic management. Public comments on Wheeler’s proposal had hit nearly 4 million by September. The ball is now back in Wheeler’s court, as he negotiates a resolution to the whole affair with his fellow commissioners.



Full speed ahead for 802.11ac Gigabit Wi-Fi

Written by admin
December 5th, 2014

802.11n takes a back seat as Wave 1 and 2 802.11ac wireless LAN products drive rollouts

Last December customers were peppering wireless LAN vendors with questions about whether to upgrade to the pre-standard-but-certified 802.11ac products flooding the market or hold off until 2015, when more powerful “Wave 2” Gigabit Wi-Fi gear was expected to become prevalent.

A year later, even though Wave 2 products have begun trickling into the market, many IT shops seem less preoccupied with Wave 2 and more focused on installing the Wave 1 11ac routers, access points and other products at hand. After all, this first wave of 11ac is at least a couple of times faster than last-generation 11n, has more range, boasts better power efficiency and is more secure. Even Apple’s new iPhone 6 and 6 Plus support it.

Surprisingly, 802.11ac products aren’t much more expensive than 11n ones, if at all. That might help explain why market watcher Infonetics reported in September that “802.11ac access point penetration has nearly doubled every quarter and is starting to cannibalize 802.11n.” And the company is optimistic that 11ac and Wave 2 products, plus carrier interest in the technology, will give the WLAN market a boost in 2015.

Ruckus Wireless, which sells WLAN gear to enterprises and carriers, sees customers taking a middle-of-the-road approach, buying some 11ac products now and figuring to buy more when Wave 2 products are plentiful. Ruckus is looking to let customers who do invest in 11ac now upgrade products to Wave 2 at little to no cost down the road.

Aruba Networks, which rolled out 802.11ac access points in May of 2013 to deliver more than 1Gbps throughput, is now shipping more 11ac than 11n gear.

“We’re definitely seeing customers making the shift — almost all of them are either actively looking at ‘ac’ or are starting to think about it in the next year,” says Christian Gilby, director of enterprise product marketing and owner of the @get11ac Twitter handle. “What’s really driving it is the explosion of devices. From a standards point of view, there are [more than 870] devices WiFi Alliance-certified for ‘ac’.”

Many of those devices were certified before the standard was finalized and do not support the performance-enhancing options that so-called Wave 2 products will feature. This includes support for multi-user MIMO, which allows transmission of multiple spatial streams to multiple clients at the same time. It’s seen as being akin to the transition from shared to switched Ethernet.

Wave 2 chipsets and gear have begun trickling out, with Qualcomm being among the latest. But WiFi Alliance certification could still be quite a few months away – maybe even into 2016 — and that could make buyers expecting interoperability hesitate.

The real holdup for Wave 2, though, says Gilby, is that it will require a chipset change in client devices such as laptops and tablets. “You really need the bulk of the clients to get upgraded before you see the benefits,” he says. (A recently released survey commissioned by network and application monitoring and analysis company WildPackets echoed Gilby’s sentiments and found that 41% of those surveyed said that less than 10% of their organization’s client devices supported 11ac.)

Gilby adds that while Wave 2 products will support double the wireless channel width, the government will first need to free up more frequencies to exploit this. Customers will also need to make Ethernet switch upgrades on the back end to handle the higher speeds on the wireless side, and new 2.5Gbps and 5Gbps standards are in the works.

Nevertheless it sounds as though enterprise Wave 2 802.11ac products will start spilling forth next year, with high-density applications expected to be the initial use for them. “There’s been some stuff on the consumer side… I think we’ll see some enterprise products on the AP side in 2015…in fact, I’m pretty sure we will,” said Gilby.
Ruckus ZoneFlex R600 802.11ac access point
Ruckus Wireless vows to become one of the first vendors to market with a Wave 2 product in 2015 and has already had success with it in the labs using Qualcomm chips, says VP of corporate marketing David Callisch, though he says vendors will really need to work hard on their antenna structures to make Wave 2 work well. “As the WiFi standards become more complex, having more sophisticated RF control is beneficial, especially when you’re talking about having so many streams and wider channels.” He says that “11ac is where it’s at… Customers need the density. WiFi isn’t about the coverage anymore, it’s about capacity.”

Like Gilby, Callisch says the big hold-up with 11ac Wave 2 advancing is on the client side, where vendors are always looking to squeeze costs. Wave 2 is backwards compatible with existing clients, but still…

“It’s expensive to put ‘ac’ into clients,” he says. “If you adopted Wave 2 products today you really couldn’t get what you need to take full advantage of it. But that will change and pretty quickly.”

RELATED: Just another Wacky Week in Wi-Fi
As for how customers are using 11ac now, Gilby says where they have already installed 11n products on the 5GHz band, they are starting to do AP-for-AP swap-outs. It can be trickier for those looking to move from 2.4GHz 11n set-ups.
Aruba Series 200 802.11ac APs
802.11ac is also catching on among small and midsize organizations, which companies such as Aruba (with its 200 series APs) have started to target more aggressively. Many of these outfits opt for controller-less networks, with the option of upgrading to controllers down the road if their businesses grow.

It’s not too soon to look beyond 11ac, either. The IEEE approved the 802.11ad (WiGig) standard back in early 2013 for high-speed networking in the unlicensed 60GHz radio spectrum band, and the WiFi Alliance will likely be establishing a certification program for this within the next year or so.

Aruba’s Dorothy Stanley, head of standards strategy, says 11ad is “not really about replacing the Wi-Fi infrastructure, but augmenting it for certain apps.”

She says it could have peer-to-peer uses, and cites frequently-talked about scenarios such as downloading a movie or uploading photos at an airport kiosk. These are applications that would require only short-range connections but involve heavy data exchanges.

Stanley adds that developing and manufacturing 11ad products has its challenges. Nevertheless, big vendors such as Cisco and Qualcomm (via its Wilocity buyout) have pledged support for the technology.

“It’s something everybody is looking at and trying to understand where its sweet spot is,” Stanley says. “The promise of it is additional spectrum for wireless communications.”

Another IEEE standards effort dubbed 802.11ax is the most likely successor to 11ac, and has a focus on physical and media-access layer techniques that will result in higher efficiency in wireless communications.

