
Google Go ventures into Android app development

Written by admin
December 12th, 2014

Google Go 1.4 adds official support for Android, as well as improved syntax and garbage collection

Google’s Go language, which is centered on developer productivity and concurrent programming, can now be officially used for Android application development.

The capability is new in version 1.4, released this week. “The most notable new feature in this release is official support for Android. Using the support in the core and the libraries in the repository, it is now possible to write simple Android apps using only Go code,” said Andrew Gerrand, lead on the Google Cloud Platform developer relations team, in a blog post. “At this stage, the support libraries are still nascent and under heavy development. Early adopters should expect a bumpy ride, but we welcome the community to get involved.”

Android commonly has leveraged Java programming on the Dalvik VM, with Dalvik replaced by ART (Android Runtime) in the recently released Android 5.0. Open source Go, which features quick compilation to machine code, garbage collection, and concurrency mechanisms, expands the options for Android developers. The upgrade can build binaries for ARM processors running Android, the release notes state, and can build a .so library to be loaded by an Android application using the supporting packages in the mobile subrepository.

“Go is about making software simpler,” said Gerrand in an email, “so naturally, application development should be simpler in Go. The Go Android APIs are designed for things like drawing on the screen, producing sounds, and handling touch events, which makes it a great solution for developing simple applications, like games.”

Android could help Go grow, said analyst Stephen O’Grady, of RedMonk: “The Android support is very interesting, as it could eventually benefit the language much the same way Java has from the growth of the mobile platform.”

Beyond the Android capabilities, version 1.4 improves garbage collection and features support for ARM processors on Native Client cross-platform technology, as well as for AMD64 on Plan 9. A fully concurrent collector will come in the next few releases.

Introduced in 2009, the language has been gaining adherents lately. Go 1.3, the predecessor to 1.4, arrived six months ago. Go, O’Grady said, “is growing at a healthy pace. It was just outside our top 20 the last time we ran our rankings [in June], and I would not be surprised to see it in the Top 20 when we run them in January.”

Version 1.4 contains “a small language change, support for more operating systems and processor architectures and improvements to the tool chain and libraries,” Gerrand said. It maintains backward compatibility with previous releases. “Most programs will run about the same speed or slightly faster in 1.4 than in 1.3; some will be slightly slower. There are many changes, making it hard to be precise about what to expect.”

The change to the language is a tweak to the syntax of for-range loops, said Gerrand. “You may now write for range s { to loop over each item from s, without having to assign the value, loop index, or map key.” The go command, meanwhile, has a new subcommand, go generate, which automates the running of tools that generate source code before compilation. With version 1.4, the Go project has also moved from Mercurial to Git for source code control.
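The before-and-after of the for-range tweak can be seen in a minimal sketch:

```go
// Demonstrates the Go 1.4 for-range change: the loop variable may now
// be omitted entirely when only the iteration itself matters.
package main

import "fmt"

// countRange counts the items in s using the new bare "for range" form.
func countRange(s []string) int {
	n := 0
	for range s { // new in 1.4: no assignment needed
		n++
	}
	return n
}

// countRangeOld does the same the pre-1.4 way, discarding the index.
func countRangeOld(s []string) int {
	n := 0
	for _ = range s {
		n++
	}
	return n
}

func main() {
	s := []string{"a", "b", "c"}
	fmt.Println(countRange(s), countRangeOld(s)) // 3 3
}
```

Both forms compile in 1.4; the new one simply drops the boilerplate blank assignment.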

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

Blowing up entrenched business models and picking up the profits that spill onto the floor is a time-honored tradition in tech, these days known by the cliche of the moment, “disruption.” This year everyone was trying to push back against those upstarts, whether by buying them like Facebook did, reorganizing to compete with them like HP and Microsoft have done, or just plain going out against them guns blazing, as it seemed that every city and taxi company did with Uber. European courts fought the disruptive effect Google search has had on our very sense of the historical record. But meanwhile, legions of net neutrality supporters in the US spoke up to save the Internet’s core value of disruption against the oligopoly of a handful of communications carriers. Here are our picks for the top stories of a very, well, disruptive year.

Nadella aims Microsoft toward relevancy in a post-PC world
Taking over from Steve Ballmer in February, CEO Satya Nadella faced several uncomfortable truths, among them: Windows powers only 15 percent of all computing devices worldwide, including smartphones, tablets and PCs, meaning Microsoft is no longer at the center of most people’s computing experience. Nadella says he wants Microsoft to be the productivity and platform company for a “mobile first, cloud first world.” Under Nadella, Microsoft has launched Office for the iPad, embraced open source software for its Azure cloud and launched the beta for Windows 10, which promises to smooth out Windows 8’s confusing, hybrid user interface. Shortly after closing the Nokia acquisition he inherited, Nadella announced 18,000 job cuts, 14 percent of its global staff. The bulk of those cuts are in Nokia, which has been relegated to the “other” market share category in smartphones. Microsoft’s sales looked good last quarter, jumping 25 percent year-over-year to $23.2 billion, though profit was hurt by the Nokia buy. Nadella claimed the company is “innovating faster,” which had better be true if he is to succeed.

HP says breaking up is hard, but necessary
Agility appears to be more important than size these days. In an about-face from the direction CEO Meg Whitman set three years ago, Hewlett-Packard announced in October that it will split up, divorcing its PC and printer operations from its enterprise business. When Whitman took the reins from former HP chief Leo Apotheker in 2011, she renounced his idea to split up the venerable Silicon Valley company, saying PCs were key to long-term relationships with customers. But shedding assets is becoming a common strategy for aging tech giants. IBM has focused on enterprise technology and services after selling first its PC operations years ago, and then its server business this year, to Lenovo, and agreeing in October to pay GlobalFoundries $1.5 billion to take over money-losing chip facilities. Symantec announced this year that it would spin off its software storage business, the bulk of which it acquired 10 years ago from Veritas Software for $13.5 billion. The big question for HP is whether it can avoid alienating users and distracting its hundreds of thousands of employees.

Uber’s bumpy ride shakes up the “sharing” economy
Legal challenges and executives behaving badly marked the ascendancy of Uber this year as much as its explosive growth and sky-high valuation. The startup’s hard-driving, take-no-prisoners culture has made it an unlikely poster child for the innocuous—and perhaps misleadingly labeled—“sharing” economy. Announcing the company’s latest billion-dollar cash injection in December, CEO Travis Kalanick bragged that Uber had launched operations in 190 cities and 29 countries this year. The service is now valued at $40 billion. But the company’s army of private drivers face legal challenges, inquiries and preliminary injunctions against operating, from Germany and the UK to various US states. Executives have made matters worse by threatening to dig up dirt on critical journalists and bragging about a tool called “god view” that lets employees access rider logs without permission. Rival app-based ride services like Lyft and Sidecar, whose operations are also the target of inquiries, are distancing themselves from Uber. Added to all this, there are complaints about the legality of other sorts of so-called sharing services, like apartment-rental site Airbnb, which has spawned not just opportunities for regular folks with an extra room and a hospitable nature, but created a class of real-estate investors who are de facto hoteliers. All this suggests that Web-based companies seeking a “share” of profits using middleman tech platforms to disrupt highly regulated businesses like taxis and lodging have some real battles against entrenched interests still to fight.

Facebook gambles $16 billion on WhatsApp
Established companies are snapping up upstarts at a pace not seen since the dot-com boom days, but in February Facebook’s plan to buy WhatsApp for $16 billion had jaws dropping at the price tag. WhatsApp has hit about a half billion users with its mobile messaging alternative to old-school carriers. Facebook already had a chat feature, as well as a stand-alone mobile app called Messenger. But people don’t use them for quick back and forth conversations, as CEO Mark Zuckerberg has acknowledged. At the Mobile World Congress in Barcelona, he confessed that he could not prove in charts and figures that WhatsApp is worth the money he spent, but said that not many companies in the world have a chance at cracking the billion-user mark, and that in itself is incredibly valuable.

Mt Gox implodes, deflating Bitcoin hype
Last year, Bitcoin seemed poised to disrupt conventional currencies. But this year the high-flying cryptocurrency hit some turbulence. The largest Bitcoin exchange in the world, Tokyo-based Mt Gox, fell to earth amid tears and lawsuits after an apparent hack cost the company about 750,000 bitcoins worth about $474 million. The company said a flaw in the Bitcoin software allowed an unknown party to steal the digital currency. A few weeks later Flexcoin, a smaller site, closed after it got hacked. The closures sent tremors of fear through the fledgling Bitcoin market. The leaders of Coinbase, Kraken, Bitstamp, BTC China, Blockchain and Circle all signed a statement lambasting Mt Gox for its “failings.” But the incidents took the luster off Bitcoin. Still, New York’s proposed Bitcoin regulations may establish a legal framework, and confidence, to help exchanges grow in one of the world’s biggest financial centers. Bitcoin concepts may also spur spinoff technology. A company called Blockstream is pursuing ideas to use Bitcoin’s so-called blockchain, a distributed, public ledger, as the basis for a platform for all sorts of transactional applications.

Apple Pay starts to remake mobile payments
Apple’s ascendance to the world’s most valuable company came on top of market-defining products like the iPod, iTunes, the iPhone and the iPad. This year, it was not the iPhone 6 or the as-yet unreleased Apple Watch that came close to redefining a product category—it was Apple Pay. Apple Pay requires an NFC-enabled Apple device, which means an iPhone 6 or 6 Plus, but by early next year, the Apple Watch as well. Businesses need NFC-equipped payment terminals. With Apple Pay, you can make a credit or debit card payment simply by tapping your iPhone to the NFC chip reader embedded in a payment terminal. As you tap, you put your finger on the iPhone 6’s biometric fingerprint reader. Apple was careful to line up partners: while Google stumbled trying to get support for its Wallet, more than 500 banks and all major credit card companies are working with Apple Pay. The potential security benefits top it off: when you enter your credit or debit card number, Apple replaces it with a unique token that it stores encrypted. Your card number is never stored on your device or in the cloud.
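Apple has not published the internals, but the tokenization pattern described above can be sketched. The Vault type and the Tokenize/Detokenize names below are illustrative, not Apple's API; the point is that a merchant only ever sees a random token, while the mapping back to the real card number (PAN) lives somewhere it can be protected:

```go
// A toy sketch of payment tokenization: the PAN never reaches the
// merchant; only the vault can map a token back to the card number.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// Vault maps opaque tokens back to real card numbers (PANs). In a real
// scheme the vault sits with the card network, not on the device.
type Vault struct {
	byToken map[string]string
}

func NewVault() *Vault { return &Vault{byToken: map[string]string{}} }

// Tokenize replaces a PAN with a random token and remembers the mapping.
func (v *Vault) Tokenize(pan string) string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	token := hex.EncodeToString(b)
	v.byToken[token] = pan
	return token
}

// Detokenize resolves a token back to the PAN; only the vault can.
func (v *Vault) Detokenize(token string) (string, bool) {
	pan, ok := v.byToken[token]
	return pan, ok
}

func main() {
	v := NewVault()
	token := v.Tokenize("4111111111111111") // standard test PAN
	fmt.Println("merchant sees:", token)    // never the card number
	pan, _ := v.Detokenize(token)
	fmt.Println("network resolves:", pan)
}
```

A stolen token is useless to an attacker without access to the vault, which is what makes the design attractive for payments.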

Alibaba’s IPO marks a new era for Chinese brands
In their first day of trading on the New York Stock Exchange in September, Alibaba shares opened at $92.70, 35 percent over the $68 initial public offering price, raking in $21.8 billion and making it the biggest tech IPO ever. Alibaba is an e-commerce behemoth in China, now looking to expand globally. But don’t expect a direct challenge to Amazon right away. Its strategy for international dominance depends not only on broad e-commerce, but also on carving out different niche marketplaces. Shares three months after its opening are going for about $10 more, suggesting that shareholders have faith in that strategy. The IPO also marked the ascendancy of Chinese brands. After scooping up IBM’s PC business years ago, and this year spending $2.3 billion for IBM’s server business as well as $2.9 billion for Motorola, Lenovo is the world’s number one PC company and number three smartphone company. Meanwhile Xiaomi, the “Apple of China,” has become the world’s number-four smartphone vendor.

Regin and the continuing saga of the surveillance state
Symantec’s shocking report on the Regin malware in November opened the latest chapter in the annals of international espionage. Since at least 2008, Regin has targeted mainly GSM cellular networks to spy on governments, infrastructure operators, research institutions, corporations, and private individuals. It can steal passwords, log keystrokes and read, write, move and copy files. The sophistication of the malware suggests that, like the Stuxnet worm discovered in 2010, it was developed by one or several nation-states, quite possibly the U.S. It has spread to at least 10 countries, mainly Russia and Saudi Arabia, as well as Mexico, Ireland, India, Afghanistan, Iran, Belgium, Austria and Pakistan. If Regin really is at least six years old, it means that sophisticated surveillance tools are able to avoid detection by security products for years, a chilling thought for anyone trying to protect his data.

EU ‘right to be forgotten’ ruling challenges Google to edit history
The EU’s Court of Justice’s so-called right to be forgotten ruling in May means that Google and other search engine companies face the mountainous task of investigating and potentially deleting links to outdated or incorrect information about a person if a complaint is made. The ruling came in response to a complaint lodged by a Spanish national insisting that Google delete links to a 1998 newspaper article that contained an announcement for a real-estate auction related to the recovery of social security debts owed by him. The complaint noted the issue had been resolved. But while EU data-privacy officials cheer, free-speech advocates say the ruling’s language means that people can use it to whitewash their history, deleting even factually correct stories from search results. As of mid-November, Google had reviewed about 170,000 requests to delist search results that covered over 580,000 links. The headaches are just starting: now the EU says the delinking must be applied to all international domains, not just sites within the region.

Obama weighs in as FCC goes back to the drawing boards on net neutrality
In January, a U.S. appeals court struck down the FCC’s 2011 regulations requiring Internet providers to treat all traffic equally. The court said the FCC did not have the authority to enact the rules, challenged in a lawsuit brought by Verizon. The ruling reignited the net neutrality debate, with FCC Chairman Tom Wheeler proposing new rules in April. President Obama in November made his strongest statement on net neutrality to date, urging the FCC to reclassify broadband as a regulated utility, imposing telephone-style regulations. Obama’s move, which critics say is an unprecedented intrusion on an independent government agency, puts political pressure on Wheeler, who reportedly favors a less regulatory approach. The proposal from Wheeler earlier this year stopped short of reclassification, and allowed broadband providers to engage in “commercially reasonable” traffic management. Public comments on Wheeler’s proposal had hit nearly 4 million by September. The ball is now back in Wheeler’s court, as he negotiates a resolution to the whole affair with his fellow commissioners.



Full speed ahead for 802.11ac Gigabit Wi-Fi

Written by admin
December 5th, 2014

802.11n takes a back seat as Wave 1 and 2 802.11ac wireless LAN products drive rollouts

Last December, customers were peppering wireless LAN vendors with questions about whether to upgrade to the pre-standard-but-certified 802.11ac products flooding the market or hold off until 2015, when more powerful “Wave 2” Gigabit Wi-Fi gear was expected to become prevalent.

A year later, even though Wave 2 products have begun trickling into the market, many IT shops seem less preoccupied with Wave 2 and more focused on installing the Wave 1 11ac routers, access points and other products at hand. After all, this first wave of 11ac is at least a couple of times faster than last-generation 11n, has more range, boasts better power efficiency and is more secure. And even Apple’s new iPhone 6 and 6 Plus support it.

Surprisingly, 802.11ac products aren’t much more expensive than 11n ones, if at all. That might help explain why market watcher Infonetics reported in September that “802.11ac access point penetration has nearly doubled every quarter and is starting to cannibalize 802.11n.” And the company is optimistic that 11ac and Wave 2 products, plus carrier interest in the technology, will give the WLAN market a boost in 2015.

Ruckus Wireless, which sells WLAN gear to enterprises and carriers, sees customers taking a middle-of-the-road approach, buying some 11ac products now and figuring to buy more when Wave 2 products are plentiful. Ruckus is looking to let customers who do invest in 11ac now upgrade products to Wave 2 at little to no cost down the road.

Aruba Networks, which rolled out 802.11ac access points in May of 2013 to deliver more than 1Gbps throughput, is now shipping more 11ac than 11n gear.

“We’re definitely seeing customers making the shift — almost all of them are either actively looking at ‘ac’ or are starting to think about it in the next year,” says Christian Gilby, director of enterprise product marketing and owner of the @get11ac Twitter handle. “What’s really driving it is the explosion of devices. From a standards point of view, there are [more than 870] devices WiFi Alliance-certified for ‘ac’.”

Many of those devices were certified before the standard was finalized and do not support the performance-enhancing options that so-called Wave 2 products will feature. This includes support for multi-user MIMO, which allows transmission of multiple spatial streams to multiple clients at the same time. It’s seen as being akin to the transition from shared to switched Ethernet.

Wave 2 chipsets and gear have begun trickling out, with Qualcomm being among the latest. But WiFi Alliance certification could still be quite a few months away – maybe even into 2016 — and that could make buyers expecting interoperability hesitate.

The real holdup for Wave 2, though, says Gilby, is that it will require a chipset change in client devices such as laptops and tablets. “You really need the bulk of the clients to get upgraded before you see the benefits,” he says. (A recently released survey commissioned by network and application monitoring and analysis company WildPackets echoed Gilby’s sentiments, finding that 41% of respondents said less than 10% of their organization’s client devices supported 11ac.)
Gilby adds that while Wave 2 products will support double the wireless channel width, the government will first need to free up more frequencies to exploit this. Customers will also need to make Ethernet switch upgrades on the back-end to handle the higher speeds on the wireless side, and new 2.5Gbps and 5Gbps standards are in the works.

Nevertheless it sounds as though enterprise Wave 2 802.11ac products will start spilling forth next year, with high-density applications expected to be the initial use for them. “There’s been some stuff on the consumer side… I think we’ll see some enterprise products on the AP side in 2015…in fact, I’m pretty sure we will,” said Gilby.
Ruckus ZoneFlex R600 802.11ac access point (image: Ruckus Wireless)
Ruckus Wireless vows to become one of the first vendors to market with a Wave 2 product in 2015 and has already had success with it in the lab using Qualcomm chips, says VP of Corporate Marketing David Callisch. He cautions, though, that vendors will really need to work hard on their antenna structures to make Wave 2 work well. “As the WiFi standards become more complex, having more sophisticated RF control is beneficial, especially when you’re talking about having so many streams and wider channels.” He adds that “11ac is where it’s at… Customers need the density. WiFi isn’t about the coverage anymore, it’s about capacity.”

Like Gilby, Callisch says the big hold-up with 11ac Wave 2 advancing is on the client side, where vendors are always looking to squeeze costs. Wave 2 is backwards compatible with existing clients, but still…

“It’s expensive to put ‘ac’ into clients,” he says. “If you adopted Wave 2 products today you really couldn’t get what you need to take full advantage of it. But that will change and pretty quickly.”

As for how customers are using 11ac now, Gilby says where they have already installed 11n products on the 5GHz band, they are starting to do AP-for-AP swap-outs. It can be trickier for those looking to move from 2.4GHz 11n set-ups.
Aruba Series 200 802.11ac APs (image: Aruba Networks)
802.11ac is also catching on among small and midsize organizations, which companies such as Aruba (with its 200 series APs) have started to target more aggressively. Many of these outfits opt for controller-less networks, with the option of upgrading to controllers down the road if their businesses grow.

It’s not too soon to look beyond 11ac, either. The IEEE approved the 802.11ad (WiGig) standard back in early 2013 for high-speed networking in the unlicensed 60GHz radio spectrum band, and the WiFi Alliance will likely be establishing a certification program for this within the next year or so.

Aruba’s Dorothy Stanley, head of standards strategy, says 11ad is “not really about replacing the Wi-Fi infrastructure, but augmenting it for certain apps.”

She says it could have peer-to-peer uses, and cites frequently-talked about scenarios such as downloading a movie or uploading photos at an airport kiosk. These are applications that would require only short-range connections but involve heavy data exchanges.

Stanley adds that developing and manufacturing 11ad products has its challenges. Nevertheless, big vendors such as Cisco and Qualcomm (via its Wilocity buyout) have pledged support for the technology.

“It’s something everybody is looking at and trying to understand where its sweet spot is,” Stanley says. “The promise of it is additional spectrum for wireless communications.”

Another IEEE standards effort dubbed 802.11ax is the most likely successor to 11ac, and has a focus on physical and media-access layer techniques that will result in higher efficiency in wireless communications.



Microsoft has yet to provide a solution for customers who can’t connect to Microsoft Update to install last week’s out-of-band patch KB 3011780

The causes of the problem remain cloudy, but the symptoms are quite clear. Starting on Nov. 18, some Server 2003, Windows Home Server 2003, and Windows XP SP3 machines suddenly refused to connect to Microsoft Update. As best I can tell, Microsoft has not responded to the problem, has not documented a workaround, and is basically doing nothing visible to fix it.

(Keep in mind that, although Windows XP is no longer supported, Security Essentials updates for XP still go through Microsoft Update, and all old patches for XP are still available — when Microsoft Update is working, anyway.)

The main TechNet thread on the subject says the error looks like this:

The website has encountered a problem and cannot display the page you are trying to view. The options provided below might help you solve the problem.

Error number: 0x80248015

There are also lengthy discussions on the SANS Internet Storm Center site and on the MSFN site.

Some people have reported that simply setting the system clock back a couple of weeks and re-running the update bypasses whatever devils may be lurking. For most, though, that approach doesn’t work.

Alternatives range from deleting the C:\WINDOWS\SoftwareDistribution folder to running wuauclt.exe /detectnow to chasing chicken entrails. Some of the fixes work on some machines, others don’t.

Poster Steve on the SANS ISC thread noted an important detail. On machines that get clobbered, the files in C:\WINDOWS\SoftwareDistribution\AuthCabs\ contain a suspicious <ExpiryDate> entry dated in 2014, which happens to coincide, more or less, with when Microsoft Update started to fail.

I looked at a couple of machines that are still working fine, and the same file on them carries a 2018 <ExpiryDate> instead.

I have no idea why the file on some machines has the 2014 date, while others have the 2018 date. But this may be the telltale sign differentiating machines that can still connect to Microsoft Update from those that don’t.
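To check which camp a machine falls into, one can pull the year out of the <ExpiryDate> element. A small Go sketch; the element name is taken from the posters' reports, and the sample dates below are placeholders in which only the years (2014 vs. 2018) match what the threads describe:

```go
// Extracts the year from an <ExpiryDate> element in the authorization
// cab XML, the telltale sign reported on the SANS ISC and MSFN threads.
package main

import (
	"fmt"
	"regexp"
)

// expiryYear returns the four-digit year of the first <ExpiryDate>
// element in the given XML, or "" if none is present.
func expiryYear(xml string) string {
	re := regexp.MustCompile(`<ExpiryDate>(\d{4})`)
	m := re.FindStringSubmatch(xml)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	// Placeholder snippets: a clobbered machine shows a 2014 date,
	// a healthy one a 2018 date.
	broken := `<ExpiryDate>2014-11-16T00:00:00Z</ExpiryDate>`
	healthy := `<ExpiryDate>2018-06-17T00:00:00Z</ExpiryDate>`
	fmt.Println(expiryYear(broken), expiryYear(healthy))
}
```

Run against the actual file contents, a "2014" result would flag a machine likely to hit the 0x80248015 error.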

Poster b3270791 on the MSFN thread has a solution that seems to work, but it involves replacing the muweb.dll file on the broken machines with an earlier muweb.dll file downloaded from the Internet. While that approach doesn’t exactly exhibit world-class security best practices, it does seem to work.

Does anybody at Microsoft give a hang, with XP already officially out to pasture and Server 2003 due to follow it to the glue farm on July 14, 2015?

Server 2003 admins have been twiddling their thumbs for a week, unable to install that out-of-band patch.

XP users are affected, too, but who cares? Microsoft’s making good on its promise to deliver Security Essentials updates to XP customers. If the customers can’t install them, well, that’s just one of those nasty implementation details, you know.



Traces of Regin malware may date back to 2006

Written by admin
November 24th, 2014

Regin was known about for some time by the security industry, according to Symantec

Malware that Symantec says was probably developed by a nation state may have been used for as long as eight years, a length of time that underscores the challenges the security industry faces in detecting advanced spying tools.

On Sunday, the computer security company published a 22-page report and blog post on the Regin malware, which it described as a powerful cyberespionage platform that can be customized depending on what type of data is sought.

It was predominantly targeted at telecom companies, small businesses and private individuals, with different modules customized for stealing particular kinds of information. Symantec found about 100 entities infected with Regin in 10 countries, mostly in Russia and Saudi Arabia, but also in Mexico, Ireland, India, Afghanistan, Iran, Belgium, Austria and Pakistan.

A first version of Regin was active between 2008 and 2011. Symantec began analyzing a second version of Regin about a year ago that had been forwarded by one of its customers, said Liam O’Murchu, a Symantec researcher, in a phone interview Sunday.

But there are forensic clues that Regin may have been active as far back as 2006. Symantec didn’t actually give Regin its name; O’Murchu said the company opted to use it because the malware had already been dubbed that by others in the security field who have known about it for some time.

If Regin does turn out to be 8 years old, the finding would mean that nation states are having tremendous success in avoiding the latest security products, which doesn’t bode well for companies trying to protect their data. Symantec didn’t identify who it thinks may have developed Regin.

Symantec waited almost a year before publicly discussing Regin because it was so difficult to analyze. The malware has five separate stages, each of which must be decrypted by the stage before it, O’Murchu said. It also uses peer-to-peer communication, avoiding the centralized command-and-control system that malware typically uses to dump stolen data, he said.
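That staged design is a loader chain: each stage carries only what is needed to decrypt the next one, so analysts never see the later stages in the clear until every layer is unwound. A toy sketch of the structure, with a single-byte XOR standing in for Regin's real ciphers (which are not public):

```go
// Illustrates a multi-stage loader chain: the payload is wrapped in
// several encryption layers, and unpacking peels them off in reverse.
package main

import "fmt"

// xorStage "decrypts" (or encrypts; XOR is symmetric) a blob with a
// single-byte key. Real loader chains use proper ciphers, but the
// structure is the same: stage N holds the key for stage N+1.
func xorStage(blob []byte, key byte) []byte {
	out := make([]byte, len(blob))
	for i, c := range blob {
		out[i] = c ^ key
	}
	return out
}

func main() {
	payload := []byte("final stage")
	keys := []byte{0x13, 0x37, 0x42, 0x99} // one key per wrapping layer

	// Build the nested blob by encrypting with each stage key in turn.
	blob := payload
	for _, k := range keys {
		blob = xorStage(blob, k)
	}

	// Unpacking reverses the chain: each stage decrypts the next.
	for i := len(keys) - 1; i >= 0; i-- {
		blob = xorStage(blob, keys[i])
	}
	fmt.Println(string(blob)) // recovers "final stage"
}
```

The practical consequence for analysts is the one O'Murchu describes: no single captured stage reveals the whole threat.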

It’s also unclear exactly how users become infected with Regin. So far, Symantec has figured out how just one computer became infected: via Yahoo’s Messenger program, O’Murchu said.

It is possible the user fell victim to social engineering, where a person is tricked into clicking on a link sent through Messenger. But O’Murchu said it is more likely that Regin’s controllers knew of a software vulnerability in Messenger itself and could infect the person’s computer without any interaction from the victim.

“The threat is very advanced in everything it does on the computer,” O’Murchu said. “We imagine these attacks have quite advanced methods for getting it installed.”

Telecom companies have been particularly hard hit by Regin. Some of the companies have been infected by Regin in multiple locations in multiple countries, Symantec found.

The attackers appear to have sought login credentials for GSM base stations, which are the first point of contact for a mobile device to route a call or request data. Stealing administrator credentials could have allowed Regin’s masters to change settings on the base station or access certain call data.

Regin’s other targets included the hospitality, airline and ISP industries, as well as government.

“We do not think [Regin] is a criminal type of enterprise,” O’Murchu said. “It’s more along the lines of espionage.”




Where to find security certifications

Written by admin
November 19th, 2014

Some say they are essential to a successful security career. Others argue they are an outdated concept and a waste of time. Despite the debate, here are 10 places to learn more about the security trade and the certifications required for some jobs in the infosec field.

Security certifications
The debate rages on over whether gaining security certifications means much. Even if you think they aren’t worth the paper they’re printed on, others believe certifications prove the holder knows what they are doing. With that, here is a group of vendors who offer security certifications.

CERT Division
According to its site: “We were there for the first internet security incident and we’re still here 25 years later. We’ve expanded our expertise from incident response to a comprehensive, proactive approach to securing networked systems. The CERT Division is part of the Software Engineering Institute, which is based at Carnegie Mellon University. We are the world’s leading trusted authority dedicated to improving the security and resilience of computer systems and networks and are a national asset in the field of cybersecurity.”

Certified Wireless Network Professional
“The CWSP certification is a professional level wireless LAN certification for the CWNP Program. The CWSP certification will advance your career by ensuring you have the skills to successfully secure enterprise Wi-Fi networks from hackers, no matter which brand of Wi-Fi gear your organization deploys.”

CompTIA
CompTIA has four IT certification series that test different knowledge standards, from entry level to expert. A list of certifications can be found here.

Global Information Assurance Certification
According to its site: “Global Information Assurance Certification (GIAC) is the leading provider and developer of Cyber Security Certifications. GIAC tests and validates the ability of practitioners in information security, forensics, and software security. GIAC certification holders are recognized as experts in the IT industry and are sought after globally by government, military and industry to protect the cyber environment.”

Information Assurance Certification Review Board
As stated in its charter: “The Board will sponsor a world-class certification set, that meets or exceeds the needs of organizations and individuals wishing to hold candidates to the highest possible level of professional certification in the area of information security. As much as it is feasible, the Board will remain independent from any commercial organization.”

The IACRB currently offers certifications for seven job-specific roles that reflect the current duties of information security professionals.

International Information Systems Security Certification Consortium
According to its Web site: “The (ISC)² CBK is the accepted standard in the industry and continues to be updated to reflect the most current and relevant topics required to practice in the field.”

To find out more about what it offers, go here.

ISACA
“As an independent, nonprofit, global association, ISACA engages in the development, adoption and use of globally accepted, industry-leading knowledge and practices for information systems,” according to its Web site.

To find out more about the training offered, go here.

McAfee Institute
McAfee’s motto is “Your Place to Learn Real-World Skills & Advance your Crime-Fighting Career!”

Find out more about its certifications here.

Mile2
Mile2 is a developer and provider of proprietary, vendor-neutral professional certifications for the cybersecurity industry. Mile2 offers a long list of internationally recognized cybersecurity certifications.

Security University
According to its site: “Since 1999, Security University has led the professional cybersecurity education industry in hands-on information security training & education. Security University provides uniform IT security workforce training with performance based, tactical hands-on security skills that qualify and validate the workforce so less people can do the same job or more, with consistent cybersecurity skills.”

If you have taken other certification courses, please let us know how they went in the comments section.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

6 ways to maximize your IT training budget

Written by admin
November 15th, 2014

Customized, in-house training often zeroes in on topics relevant to your business, but it comes with a correspondingly high price tag. If your employees simply need to fill in knowledge gaps or get up to speed with a specific software package, there are a plethora of affordable, flexible options for even the most limited budgets.

Although the economy is picking up ever so slightly, IT departments remain on the lookout for ways to do more with less – fewer people, fewer resources, less money. That’s why learning how to stretch the training budget as far as possible can pay significant dividends. This is true both for those organizations seeking to develop employee skills and knowledge for the least expenditure, and for employees looking to improve and enhance their career potential and longevity.

If an organization can get its employees to buy into training and career development, it can effectively double its dollars when costs are split 50-50. This is already an implicit aspect of many tuition support programs, where employers offer a partial stipend or payment to help cover the costs of academic coursework. Why not make it a part of how IT training dollars get spent, too?

Some IT departments offer their employees a menu of courses or certifications from which employees can choose, coupled with (partial) reimbursement plans to help defray their costs. By offering more support for those credentials it needs the most, and less for those credentials outside the “must-have” list, organizations can steer employees in the directions they want them to go.
Negotiate Discounts to Control Costs

Times are tough for training companies, too. If you do want to buy into online or classroom training, you’ll get a better bang from your budget if you negotiate a “group rate” of sorts to cover some or all of your training needs.

Although online or virtual classes may not be as popular as instructor-led in-class training, remote offerings usually cost less to begin with; obtaining additional discounts will help leverage such spending even further. Some training companies offer subscriptions to their entire training libraries on a per-seat, per-month basis.

Pluralsight offers its extensive training catalog to individuals for about $50 a month, for example, and its business offerings include progress tracking and assessments for enrolled employees, as well as library access for some number of individuals. A 10-user license costs about $25 per user per month for a Basic package, and double that for the Plus package, which adds exercises, assessments and offline viewing to the basic ability to watch courses online on a PC or mobile device.
Purchase Key Items in Bulk

If you know you need to run a team of system engineers or senior tech support staff through a specific curriculum that includes certain certification exams, and you can hold those people to a schedule, then you can purchase exam voucher or training/voucher bundles at a discount. As the purveyor of many popular and high-demand cert exams, and a publisher of copious related training materials, Pearson VUE/Pearson Education offers much of what employers need for such programs. Contact the Voucher Store to inquire about volume purchase pricing and arrangements.

(Note: The author writes on an occasional basis for InformIt, a professional development branch of Pearson, and on a frequent basis for the Pearson IT Certification blog.)
Assemble Employee Study Groups and Resources

Just a little added support for employees involved in training, or preparing for certification, can help organizations realize better results from (and returns on) their training investments. Consider some or all of the following strategies to help employees make the most of their training experience and get the best value for your training dollars:

Set up a wiki or online forums/chat rooms on a per-topic or per-exam basis for employees to use and share.
Encourage employees to share their best resources, learning materials, study techniques and so forth with one another. Build compendia of such materials and pointers for ongoing sharing.
Provide access to practice tests, exercises and simulated or virtual labs for hands-on work so employees can check their learning, buttress their weak spots and develop a well-rounded understanding of training materials, exam objectives and coverage.
Identify local subject matter experts to whom training and certification candidates can turn for added information and explanation when the need arises.

Because many employees will take an interest in such resources, you can usually find volunteers to help create and maintain these low-cost but high-value training and prep tools.

Provide Recognition and Rewards to Those Who Succeed

Sure, it would be nice if everyone who earns a certification or masters some new body of knowledge could get a 25 percent raise and/or a promotion as a consequence of completing a program of some kind. In some cases, such rewards may even be required to retain employees who earn coveted credentials such as the Cisco CCIE, (ISC)2 CISSP or the ITIL Master Qualification.

However, even small rewards, such as a $100 gift certificate for a family night out or a gift card to a local department store, can communicate your appreciation to those employees who manage to chew, swallow and digest what they must bite off to pursue training and certification. A public pat on the back in the employee newsletter or at a periodic employee meeting doesn’t hurt, either. Recognition provides added impetus for employees to finish what they start and shows them that you value the time and effort they must expend in pursuing training and certification.
Ask for Ideas and Suggestions, Then Act Upon Them

Beyond the various methods to stretch your training budget outlined here, you can also turn to your target audience to ask how it thinks you can maximize the return on training and certification. You may be surprised by the quality and quantity of resulting feedback. Most employees respond positively to on-the-job opportunities for career and professional development. They, too, understand that the likelihood of continuing support rests on the outcomes of their training and certification efforts. In the end, they know full well that, by helping the organization excel and improve, they too will benefit from improved job and pay prospects.



3 power-sipping monitors lower energy bills

Written by admin
November 8th, 2014

Can you have a great monitor that also scrimps on electricity — and helps the environment? We test three 27-in. power-saving displays to find out.

While businesses have always been careful about how much they spent on electricity, today’s displays are making it a lot easier to keep bills lower — and the environment safer.

For example, I measured a four-year-old 27-in. Apple Cinema Display as using 101 watts of power. However, the three 27-in. displays that I’ve tested in this roundup of environmentally smart monitors — AOC’s E2752VH, Dell’s UltraSharp 27 UZ2715H and the Philips 271S4LPYEB — use an average of 26.4 watts.

This is also reflected in the yearly electricity cost of using each monitor. Assuming the display is used for 10 hours every business day and that power costs 12 cents per kilowatt-hour (the national average), the Apple Cinema Display costs $29 for the year, while the three reviewed here average an annual cost of $9.32. That may not sound like much, but for a business with a couple of hundred employees, it adds up quickly.

In addition, all three of these displays carry the EPA’s EnergyStar logo and boast wide screens that can display full HD resolution. And according to the EPA’s Greenhouse Gas Equivalencies Calculator, every kilowatt-hour of power saved equals 1.5 lbs. of carbon dioxide that isn’t spewed into the atmosphere.

Interestingly, these displays have different strategies as to how they reduce their power consumption. The Dell screen relies on the computer’s screen saver to signal when it’s time to go to sleep. The AOC adds a built-in timer for determining when it turns the screen off. And the Philips display includes a pair of infrared sensors that continually scan to see if someone is sitting in front of the screen. When you get up, it senses that the space in front is empty and shuts the screen down, reducing its power draw.

Of course, saving on electricity doesn’t mean a thing if a display doesn’t excel in its main purpose. To see what these displays have to offer, I used them every day for a couple of months in my office. I took turns with them one-on-one and then viewed them together showing the same material for comparison.

Saving a few kilowatts here and there might not sound like a huge savings. But, if you multiply the savings by the number of monitors in use every day in a company’s offices, it adds up quickly.

If you’re looking for a frugal monitor, AOC’s E2752VH is not only the cheapest display of the three, but is the only one that uses no measurable power when in sleep mode. However, it falls short on creature comforts like a webcam and USB ports.

Like the others reviewed here, the AOC monitor uses a 27-in. IPS panel with a 1920 x 1080 resolution. It features an ultra-fast 2-millisecond (ms) response time, versus 5ms and 8ms for the Philips and Dell displays, respectively.

The all-black casing is broken only by a small blue light in the lower right corner to show it’s turned on. On the right side of the monitor’s front, there are controls for turning it on and off, raising and lowering the volume, and using the on-screen menu. The marking for each switch’s function is embossed in the display’s plastic case; classy, but I found the highlighted white markings on the other two displays easier to read.
Saving power

The AOC monitor has two techniques for saving power when it’s not being used. First, like the Dell display, it can use the computer’s screen saver to trigger its sleep mode. It can also be configured to shut down when the computer is off or goes to sleep. In addition, a timer lets you shut down the screen after a period of inactivity. Unfortunately, the time can be configured in increments of one hour only.

The AOC consumed 27.5 watts when being used, a little more than the Dell display’s power profile. Unlike the others, when the AOC screen goes to sleep, it uses no discernible power, compared to 1.1 watts and 2.1 watts for the Dell and Philips displays, respectively. It took 3.1 seconds to wake up.

Assuming it is used for 10 hours every business day and power costs 12 cents per kilowatt-hour (the national average), the AOC should cost an estimated $8.30 to use per year. That makes it the cheapest of the three to use, if only about $2 a year less than the Philips monitor.
How well it worked

At 215 candelas per square meter, the AOC’s light output was the lowest of the three; to my eyes, it looked visibly dimmer than the Dell monitor. Its color balance appeared accurate with strong blues and reds. Video play was smooth, with no lags or glitches.

In addition to a standard mode, the display has settings for text, Internet, games, movies and sports. For those who want to tweak the output, the monitor has adjustments for brightness, contrast, gamma and color temperature. Unfortunately, temperature settings are restricted to normal, warm, cool and sRGB settings. You can fiddle with the red, blue and green colors, but I preferred using the Philips’s more extensive presets that are based on actual color temperatures.

The monitor also comes with two Windows-only apps. iMenu lets you adjust brightness, contrast and gamma, but lacks the calibration patterns of the Philips display. Interestingly, several of the program’s labels are in Chinese characters, making the app hard to fathom without the manual.

The eSaver app is how you tell the monitor when to go to sleep, based on the status of your PC. For example, I set it to turn off one minute after the computer is shut down or 10 minutes after the computer goes to sleep or the screen saver comes on. Neither of the other monitors reviewed here can match this specificity.

The AOC display makes do with two 2.5-watt speakers; there is no webcam, microphone or USB hub. The speakers are on the bottom edge of the display, so they sound thin and don’t get nearly as loud as the Dell’s sound system.
Other features

Its assortment of ports (one DVI, one HDMI and one VGA) lacks the Dell’s second HDMI port and the Philips’s DisplayPort input. But the AOC ports are all horizontally oriented, while the other two displays have vertical ports that are more awkward to plug in.

I did appreciate the addition of an analog audio input, which I used to connect to my phone’s output to listen to music while working. The display also has a headphone jack in the back.

The AOC stand was the easiest of the three to set up, because the base snaps into the monitor arm. Like the others, the monitor has standard VESA mounting holes on the back for screwing it into a third-party stand. However, the only way to adjust the stand is to tilt it up to 3 degrees forward or up to 17 degrees back. It can’t go up and down, swivel or rotate.
Bottom line

With a three-year warranty, the AOC is available at prices starting under $200, the least expensive of the three. If you just need a basic monitor that can offer some savings in electric bills, this is a good choice, but its lack of a webcam among other features may limit its usefulness.

The Dell UltraSharp 27 may be the most expensive of the three displays reviewed here, but it delivers the best mix of screen and multimedia accessories.

The gray and black monitor takes up the least desktop space of the three, something to consider if you’re part of a company that is tight on cubicle space. Built around an IPS panel that offers 1920 x 1080 resolution, the Dell uses hardened anti-glare glass.

There are controls up front for turning the display on and off, using the on-screen menu, and turning the volume up or down; there’s also a handy mute button. I was surprised and impressed by a button with a telephone receiver icon that can initiate or answer a phone call over Microsoft’s Lync VoIP system. (To get this to work, you’ll need to link the screen with a PC via a USB cable.)

The Dell comes with a seductive-sounding PowerNap feature, which triggers the display’s sleep mode when the computer’s screen saver comes on. The monitor first dims the screen’s brightness and then shuts itself down. The screen comes back on when the host computer’s screen saver shuts off. In my tests, the screen woke up in 1.5 seconds.

While being used, the Dell UltraSharp 27 consumed 23.6 watts of power, the least amount of the three. This drops to 1.1 watts when in sleep mode, half what the Philips monitor uses in the same mode.

Based on a typical usage scenario (assuming it’s on for 10 hours a day for every business day and in idle mode the rest of the time, and that power costs 12 cents per kilowatt-hour), this adds up to an estimated annual power cost of $9.45, halfway between the higher-cost Philips and less expensive AOC monitors.
How well it worked

The display was able to deliver 246 candelas per square meter of brightness, the brightest of the three reviewed here. Its reds and greens were spot on, but the screen’s blues appeared slightly washed out. The screen was able to render smooth video; however, its video response time of 8ms is the slowest of the three.

The Dell monitor’s on-screen menu has controls for tweaking brightness, contrast and sharpness as well as adjusting the display’s gamma settings for using a Windows PC or a Mac. To do any meaningful customization, though, you’ll need to load the included Display Manager software. This application, which only works with Windows PCs, includes the ability to change the display’s color temperature as well as choose among Standard, Multimedia, Gaming or Movie modes.

The Dell is a fine all-around monitor; it excels at delivering all the audio-visual accessories that a modern desktop requires. These include an HD webcam for video conferences as well as a dual-microphone array that does a good job of capturing your voice while reducing noise.

The display’s pair of speakers sounded surprisingly good and can actually get too loud for an office. There is a headphone jack on the side, but the Dell lacks the AOC’s audio-in jack.
Other features

The Dell display has the best assortment of ports as well, including one VGA, one DisplayPort and two HDMI ports, both of which can work with an MHL adapter and a compatible phone or tablet. The ports are oriented vertically rather than the more convenient horizontal orientation of the AOC display.

Unlike the others, the display has two USB 2.0 ports and a single USB 3.0 port. All of its cables can be routed through a hole in the back of the monitor’s stand to keep them tidy. On the other hand, the stand’s adjustability is limited: It tilts forward by 5 degrees and back by 22 degrees, but it can’t go up and down, rotate or swivel.

As is the case with the AOC and Philips displays, you can remove the display from the stand and use its VESA mounting holes for use with a third-party stand or mounting hardware. You just need to press a spring-loaded button to release the panel from the stand.
Bottom line

The Dell UltraSharp 27 includes a three-year warranty and it has a list price of $450, considerably more than the AOC’s cost. (Note that the cost of the Dell changed several times while this review was being written and edited.) But given that, the Dell provides all the accoutrements needed for doing desktop work without wasting power.

As minimalist as a monitor gets these days, the Philips 271S4LPYEB is not only power-aware but knows when you’re sitting in front of it and can automatically go to sleep when you’re not. Too bad it lacks creature comforts like a webcam, speakers or even an HDMI port.

The all-black display houses a 1920 x 1080 IPS panel that is rated at 5ms response time.

Perhaps the most interesting feature is a pair of infrared sensors that perceive whether someone is sitting in front of the screen. Called PowerSensor, the system can be set to four different distances (user to screen) between 12 in. and 40 in. It’s quite an impressive trick. One minute after the space in front of the display is vacated, the image dims; two minutes later, the screen goes black. Then, like magic, the screen lights back up when you sit in front of it. When I tried it, the screen came back to life in less than a second.

After some fiddling (to figure out which distance setting was best for me), I found it worked well and quickly responded to my absence and return. I was able to fool it, though, by leaving my desk chair with its back to the screen.

The Philips used 28.8 watts of power in Office mode, which was similar to the standard modes of the other two displays (however, power use varied only slightly with the other modes). When the PowerSensor kicked in, the power demand was initially reduced to 10.6 watts for one minute and then to 2.1 watts.

Ironically, though, the Philips turned out to be the highest power user of the three — probably because of the overhead required to keep the PowerSensor active and ready to restart the display. All told, using my assumptions that it was used for 10 hours a day for every business day and that electricity costs 12 cents per kilowatt-hour, the display had an estimated annual operating expense of $10.20.

In the front, the Philips monitor has a control for fine-tuning the PowerSensor along with others for turning the display on and off and working with the screen’s menu. A large bluish-green LED shows that the display is turned on. There are also buttons for adjusting the brightness level and selecting the company’s SmartImage feature.

SmartImage optimizes the display’s contrast to suit what you’re looking at. It has preset modes for Office, Photo, Movie, Game or Economy (which reduces its brightness by two-thirds). There’s also an adjustment for the screen’s color temperature with six settings available between 5,000K and 11,500K.

With the ability to deliver 221 candelas per square meter, the Philips monitor delivered rich blues and sharp yellows, but the display’s greens were too light and its reds appeared dull. Its ability to show video was very good — clear and smooth with no frame drops.

Loading the included Smart Control Premiere app (Windows PCs only) provides a deeper level of customization. It has the ability to change the screen’s black level and adjust the gamma settings. A big bonus is that it has a series of test images that you can use to calibrate the display.
Other features

While the AOC and Dell monitors have built-in speakers, the Philips lacks speakers, a webcam, a microphone and USB ports. In other words, it is a display and nothing more — rather unusual in today’s market.

Its collection of input ports is oriented vertically, rather than in the AOC display’s more convenient horizontal layout. The Philips has one DisplayPort, one DVI and one VGA port, but no HDMI port; as a result, I used its DVI input with an HDMI adapter.

The Philips does offer the best stand of the trio. With little effort, the display can be tilted forward 5 degrees and back by up to 20 degrees; it can also be raised or lowered by 6.3 in. and swiveled to the right or left by up to 140 degrees.

The entire display can also be easily rotated from landscape to portrait. This is useful if you want to work with a long document or a vertically oriented website without continually scrolling. The monitor’s software reorients the image after the screen is rotated.

After pressing a button in the back, you can remove the display from the stand, revealing its VESA mounting holes. This allows it to be used with a third-party stand or mounting hardware.
Bottom line

The Philips display comes with a three-year warranty and starts at a retail price of about $260, between the cheaper AOC monitor and the better equipped Dell display. While I love the display’s ability to sense when I’m working and when I’m someplace else — and the well-constructed stand — the Philips really needs some further refinement and power reduction before it’s ready for my office.

After using each of these three monitors for several weeks, I would love an amalgam of the three that is built around the Philips adaptable stand, the AOC’s power-saving abilities and the Dell’s bright screen.

That said, the PowerSensor feature on the Philips 271S4LPYEB is impressive and works well, but it uses too much electricity to be of much use.

I love that the AOC E2752VH doesn’t use a watt when it’s asleep. At $240, it is also the cheapest to get and use, but that’s not enough compensation for having the least bright monitor of the three.

The Dell UltraSharp 27 UZ2715H may not have the fastest display panel, but it is fine for business work and is the best equipped and brightest of the three — and uses a reasonable amount of power. I wish that the stand were more adaptable, but no other screen here does so much.

To see how these 27-in. monitors compare, I set each up in my office for at least a week as my primary display. I used each of them to write emails, edit text, create spreadsheets, watch videos, nose around on the Web and work with interactive online programs.

After unpacking and putting each together, I spent some time measuring and investigating how each stand can tilt, raise or rotate the screen. Then I looked over the display’s ports, speakers, microphone and webcam. I looked at the monitor’s controls and tried out the device’s features.

Then I connected each of the monitors to an iPad Mini (with an HDMI adapter), a Toshiba Radius P-55W notebook and a Nexus 7 phone (connecting via a Chromecast receiver). Each screen was able to work with each source; since the Philips display lacks an HDMI port, I used its DVI port with an HDMI-to-DVI adapter.

I next measured each screen’s brightness with a Minolta LM-1 light meter using a white image in a darkened room. After measuring the light level at nine locations, I averaged them and converted the result to candelas per square meter. I then displayed a standard set of color bars and compared the three displays using an Orei HD104 four-way video distribution amplifier and a Toshiba Radius computer as the source.

To see how these monitors save power, I looked into their power conservation settings and software. I checked how flexible the settings were for putting the display to sleep and measured how much electricity each monitor used with a Kill A Watt power meter.

Using the average U.S. price of 12 cents per kilowatt-hour of electricity, I estimated how much it might cost to operate each monitor, based on the assumption that it was used for 10 hours a day over the work year (250 days) and was asleep for the rest of the time.
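That cost model can be sketched in a few lines of Python. The wattage figures below are the measurements reported in this review; treat the output as a rough estimate, since rounding and intermediate power states (such as the Dell's dimmed stage) mean the results won't match the published dollar figures exactly.

```python
# Sketch of the review's cost model: 10 hours a day of active use over a
# 250-day work year, asleep the remaining hours, at 12 cents per kWh.

ACTIVE_HOURS = 10 * 250                    # 2,500 hours awake per year
SLEEP_HOURS = 365 * 24 - ACTIVE_HOURS      # 6,260 hours asleep per year
RATE = 0.12                                # dollars per kilowatt-hour

def yearly_cost(active_watts, sleep_watts):
    """Estimated annual electricity cost, in dollars, for one monitor."""
    kwh = (active_watts * ACTIVE_HOURS + sleep_watts * SLEEP_HOURS) / 1000
    return kwh * RATE

# Plugging in the measured draws for the three reviewed monitors:
aoc = yearly_cost(27.5, 0.0)      # no measurable sleep draw
dell = yearly_cost(23.6, 1.1)
philips = yearly_cost(28.8, 2.1)
```

Run with these inputs, the model puts the Philips at roughly $10.20 a year and the AOC near $8.25, in line with the figures quoted above.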



Although degrees and IT certifications can be great eye candy for a resume, experience is king. As you may have encountered, a lack of experience can be a major roadblock to getting interest from employers in your early years.

Though you might have the Network+ or CCNA cert, for instance, have you actually configured or played around with a network? Even if you already have held a network technician or administrator position, you might not have experience with all aspects of networking yet. Fortunately there are ways to get hands-on network administration experience, even at home — and most don’t cost anything.

In this story I present nine self-taught labs on various networking topics, explaining the basics of each and how to get started. I begin with easier, newbie-level projects and progress to more complex ones requiring more thought and time. Some of the tasks take just a few minutes, while others are suitable for a weekend project. You may want to invest in some networking gear to teach yourself the basics, but there are ways around this.

Project 1: Configure TCP/IP Settings

One of the most basic network admin tasks is configuring the TCP/IP settings. If a network isn’t using the Dynamic Host Configuration Protocol (DHCP) that automatically hands out IP addresses after clients connect, you’ll have to manually set static IP and DNS addresses for each client. You may also be required to temporarily set static IP information when doing the initial setup and configuration of routers or other network components.

To set static IP details you must know the IP address of the router and the IP address range in which you can configure the client. You can figure this out from the settings of a computer already successfully connected to the network.

You’ll need the IP address as well as the Subnet Mask, the router’s IP address (a.k.a. the Default Gateway) and the Domain Name System (DNS) Server addresses.

In Windows: Open the Network Connections via the Control Panel or Network and Sharing Center. Next, open the connection that is already on the network and click the Details button.
In Mac OS X: In System Preferences, click the Network icon, then select the connection that is already on the network, such as AirPort (wireless) or Ethernet (wired). With a wired connection you’ll likely see the info you need on the first screen; for a wireless connection, additionally click the Advanced button and look under the TCP/IP and DNS tabs.

Write the numbers down or copy and paste them into a text file, and then close the window.
A subnet calculator shows the acceptable IP address range for a network.

To see the acceptable IP address range for the network, you can input the IP address and Subnet Mask into a subnet calculator, which will show you the first and last usable addresses on the subnet.

Even though you now know the IP address range, remember that each device must have a unique IP. It’s best to check which IP addresses are taken by logging into the router, but you could also take an educated guess or simply choose a random address within the range. If the address is already taken by another device, Windows or OS X will likely alert you of an IP conflict and you can choose another. Once the IP address is set, write it down or save it in a document; a best practice is to keep a log of all the static IPs along with the serial numbers of the computers that use them.
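If you'd rather script the subnet math than use a web calculator, Python's standard ipaddress module can do the same job, and a plain dictionary can serve as the static-IP log suggested above. The addresses here are hypothetical examples, not values from this article.

```python
import ipaddress

# Given an address and subnet mask (hypothetical example values),
# work out the network and its range of assignable host addresses.
iface = ipaddress.ip_interface("192.168.1.23/255.255.255.0")
network = iface.network                      # 192.168.1.0/24

hosts = list(network.hosts())                # all assignable host addresses
first, last = hosts[0], hosts[-1]            # 192.168.1.1 .. 192.168.1.254

# A simple log of static assignments, as the article suggests keeping:
static_ips = {
    "192.168.1.50": "workstation-01 (serial ABC123)",
}

def is_free(addr, taken=static_ips):
    """Check a candidate address against the log before assigning it."""
    return str(addr) not in taken and ipaddress.ip_address(addr) in network
```

This only checks your own log, of course; the router's client list (or an IP conflict warning from the OS) remains the authoritative check.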

Manually setting the computer’s IP address.

Now, to set a static IP address:
In Windows: Open the Network Connection Status window, click the Properties button and open the Internet Protocol Version 4 (TCP/IPv4) settings. Choose “Use the following IP address” and enter the settings: an IP address that’s in the acceptable range, plus the Subnet Mask, Default Gateway and DNS Server from the Network Connection Details window.
In Mac OS X: Open the Network window and click the Advanced button. On the TCP/IP tab, click the drop-down next to Configure IPv4, choose Manually and enter an IP address that’s in the acceptable range, plus the Subnet Mask and router address you copied earlier. Go to the DNS tab and enter the DNS Server address you copied before.
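On Windows, the same steps can also be scripted with the netsh command-line tool. Here is a hedged sketch in Python that builds the equivalent commands; the interface name and addresses are hypothetical, and actually applying them requires an elevated prompt on the target machine.

```python
import subprocess

def static_ip_commands(interface, ip, mask, gateway, dns):
    """Build the netsh commands equivalent to the manual Windows steps above."""
    set_addr = ["netsh", "interface", "ip", "set", "address",
                interface, "static", ip, mask, gateway]
    set_dns = ["netsh", "interface", "ip", "set", "dns",
               interface, "static", dns]
    return [set_addr, set_dns]

def apply_commands(commands):
    """Run each command; Windows only, needs an administrator prompt."""
    for cmd in commands:
        subprocess.run(cmd, check=True)

# Hypothetical values -- substitute the details gathered in Project 1:
commands = static_ip_commands("Ethernet", "192.168.1.50",
                              "255.255.255.0", "192.168.1.1", "192.168.1.1")
```

Scripting the change this way is handy when you have to configure a batch of machines against the log of static assignments rather than clicking through each one's network dialogs.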

Project 2: Survey Wi-Fi networks with a stumbler

As a network admin, you’ll likely help set up, troubleshoot and maintain the wireless portion of the network. One of the most basic tools you should have is a Wi-Fi stumbler. These tools scan the airwaves and list the basic details about nearby wireless routers and access points (APs), including the service set identifier (SSID), also known as the network name; the MAC address of the router/AP; the channel; the signal level; and the security status.

You can use a Wi-Fi stumbler to check out the airwaves at home or at work. For instance, you can check which channels are being used by any neighboring wireless networks so you can set yours to a clear channel. You can also double-check to ensure all the routers or access points are secured using at least WPA or WPA2 security.
NetSurveyor The NetSurveyor stumbler gives a text-based readout and visual charts of wireless channel usage and signals.

Vistumbler and NetSurveyor (for Windows), KisMAC (for OS X) and Kismet (for both, plus Linux) are a few free options that give both text-based readouts and visual charts of the channel usage and signals. Check out my previous review of these and others.
Wifi Analyzer The Wifi Analyzer app provides a nice visualization for channel usage.

If you have an Android phone or tablet, consider installing a Wi-Fi stumbler app on it for a quick, less detailed look at the Wi-Fi signals. Wifi Analyzer and Meraki WiFi Stumbler are two free options. See my previous review of these and others.
Project 3: Play with a wireless router or AP

To get some experience with setting up and configuring wireless networks, play around with your wireless router at home. Or better yet, get your hands on a business-class AP: See if you can borrow one from your IT department, check eBay for used gear or consider buying new equipment from lower-cost vendors such as Ubiquiti Networks, where APs start at around $70.

To access a wireless router’s configuration interface, enter its IP address into a web browser. As you’ll remember from Project 1, the router’s address is the same as the Default Gateway address that Windows lists in the Details window for your wireless network connection.

Accessing an AP’s configuration interface varies. If there’s a wireless controller, it’s the one interface you’ll need to configure all the APs; with controller-less systems you’d have to access each AP individually via its IP address.

Once you’ve accessed the configuration interface of your router or AP, take a look at all the settings and try to understand each one. Consider enabling wireless (or layer 2) isolation if supported and see how it blocks user-to-user traffic. Perhaps change the IP address of the router/AP in the LAN settings; on routers, you could also disable DHCP and statically assign each computer/device an IP address. Also consider setting a static DNS address (like from OpenDNS) in the WAN settings, and look into the Quality of Service (QoS) settings to prioritize the traffic. When you’re done experimenting, make sure the device is set to the strongest security, WPA2.
typical AP interface I’ve statically assigned this AP an IP address and DNS servers.

If you can’t get your hands on a business-class AP, consider playing around with interface emulators or demos offered by some vendors, as Cisco does with its small business line.

Project 4: Install DD-WRT on a wireless router
For more experimentation with wireless networking, check out the open-source DD-WRT firmware for wireless routers. For compatible routers, DD-WRT provides many advanced features and customization seen only in business- or enterprise-class routers and APs.

For instance, it supports virtual LANs and multiple SSIDs so you can segment a network into multiple virtual networks. It offers a VPN client and server for remote access or even site-to-site connections. Plus it provides customizable firewall, startup and shutdown scripts and supports a few different hotspot solutions.
DD-WRT DD-WRT loads a whole new feature set and interface onto the router.

For more on DD-WRT and help on installing it on your router, see “Teach your router new tricks with DD-WRT.”
Project 5: Analyze your network and Internet traffic

As a network admin or engineer you’ll likely have to troubleshoot issues that require looking at the actual packets passing through the network. Though network protocol analyzers can cost up to thousands of dollars, Wireshark is a free open-source option that works on pretty much any OS. It’s feature-rich, with support for live and offline analysis of hundreds of network protocols, decryption for many encryption types, powerful display filters, and the ability to read/write via many different capture file formats.
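Once you start capturing, display filters are how you cut through the noise. A few common examples in Wireshark's own display-filter syntax (the address shown is a placeholder):

```
ip.addr == 192.168.1.20       # traffic to or from one host
tcp.port == 443               # HTTPS traffic
http.request.method == "GET"  # HTTP GET requests only
dns                           # all DNS traffic
tcp.analysis.retransmission   # retransmitted segments, a congestion clue
```

Type these into the filter bar above the packet list; Wireshark colors the bar green when the expression is valid.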

Wireshark Wireshark capturing network packets.

Once you get Wireshark installed, start capturing packets and see what you get. In other words, browse around the Web or navigate network shares to see the traffic fly. Keep in mind you can stop the live capturing to take a closer look. Although Wireshark can capture all the visible traffic passing through the network, you may see only the traffic to and from the client on which you’re capturing if “promiscuous” mode isn’t supported by your OS and/or the network adapter. (For more information, see the Wireshark website.)

Note: Even though packet capturing is usually only a passive activity that doesn’t probe or disturb the network, some consider monitoring other people’s traffic a privacy or policy violation. So you don’t get into trouble, you ought to perform packet capturing only on your personal network at home — or request permission from management or the CTO before doing it on your work network. In fact, you should clear it with management before doing any monitoring or analysis of a company or school network.

There are other free network analyzers you might want to experiment with. For instance, the EffeTech HTTP Sniffer can reassemble captured HTTP packets and display a Web page, which can visually show you or others what’s captured rather than looking at the raw data packets. Password Sniffer “listens” just for passwords on your network and lists them, which shows just how insecure clear-text passwords are. And for mobile analysis via a rooted Android phone or tablet, there are free network analyzers like Shark for Root.

Project 6: Play with network emulators or simulators
Though you might not be able to get your hands on enterprise-level network gear for practicing, you can use emulators or simulators to virtually build and configure networks. They can be invaluable tools for preparing for IT certifications, including those from Cisco and Juniper. Once you create virtual network components and clients you can then configure and administer them with emulated commands and settings. You can even run network analyzers like Wireshark on some, to see the traffic passing through the network.

Here are a few of the many emulators and simulators:
The GNS3 Graphical Network Simulator is a popular free and open source choice. It requires you to supply the OS, such as Cisco IOS or Juniper’s Junos OS, which usually requires a subscription or support contract from the particular vendor, but you may be able to get access via the IT department at work or school.

GNS3 interface GNS3 Graphical Network Simulator supports Cisco IOS/IPS/PIX/ASA and Juniper JunOS.

Netkit is another free and open source option. It doesn’t include vendor-specific functionality and is limited to generic networking components, but it also doesn’t require you to have the OSes as GNS3 does.

The Boson NetSim Network Simulator is a commercial offering, with pricing starting at $99, intended to teach Cisco’s IOS. It offers a free demo download, but that version’s functionality is greatly limited.

There are also websites, such as SharonTools and Open Network Laboratory, that offer remote admin access to network components and Web-based emulators for you to practice with commands. Network World has a nice roundup of free Cisco simulators and emulators.

Project 7: Perform penetration testing on your own network
You can read and read about network security, but one of the best ways to learn about or to verify security is by penetration testing. I don’t mean you should snoop on your neighbors or hack a business; try it on your own network so you don’t end up in the slammer.

Perhaps find a network vulnerability that interests you, research how to take advantage of it and, once you’ve done the hack, make sure you understand how it was possible. And always ensure your network and those you administer are protected from the vulnerability.

Here are a few hacks you could try:
Crack Wi-Fi encryption — WEP is the easiest — with Aircrack-ng. Crack a Wi-Fi Protected Setup (WPS) registrar PIN with Reaver-WPS to gain access to a wireless router.
Hijack online accounts via Wi-Fi using the Firefox add-on Firesheep or the Android app DroidSheep.
Capture and crack 802.1X credentials using FreeRadius-WPE.

When researching, you’ll likely find how-to tutorials on exactly how to do the hacks and what tools you need. One popular tool that’s filled with hundreds of penetration testing tools is the BackTrack live CD, but that project is no longer maintained. Its successor, Kali Linux, can be installed on a computer or virtual machine or run via live CD or USB.

If you find you like penetration testing, perhaps look into becoming an Ethical Hacker.
Project 8: Set up a RADIUS server for enterprise Wi-Fi security

At home you likely encrypt your wireless router with the Personal or Pre-shared Key (PSK) mode of WPA or WPA2 security to keep others off the network and to prevent them from snooping on your traffic. The Personal mode is the simplest way to encrypt your Wi-Fi: Set a password on the router and simply enter it on the devices and computers you connect.

Businesses, however, should use the Enterprise mode of WPA or WPA2 that incorporates 802.1X authentication. This is much more complex than the Personal mode but provides better protection. Instead of a global Wi-Fi password, each user receives his or her own login credentials; the encryption protects against user-to-user snooping. Plus you can change or revoke individual login credentials to protect the network when an employee leaves or a device becomes lost or stolen.

To use the enterprise mode you must have a separate Remote Authentication Dial-In User Service (RADIUS) server to handle the 802.1X authentication of users. As a network admin you’ll likely have to configure and troubleshoot clients with 802.1X authentication and help maintain the RADIUS server. For practice, consider setting up your own server and using enterprise-level Wi-Fi security on your home network.

If you’re working on a network that has a Windows Server, the Network Policy Server (NPS) or Internet Authentication Service (IAS) component can be used for the RADIUS server. But if not, you have a couple of free options. If you want some Linux experience, consider the open-source FreeRADIUS. Some easier-to-use options that include a Windows GUI are the freeware TekRADIUS and the 30-day free trials of commercial products like ClearBox. In a previous review, I evaluated these and other low-cost RADIUS servers.

Once you have the RADIUS server installed, create user accounts and input shared secrets (passwords) for the APs. Then configure the wireless router or APs with WPA/WPA2-Enterprise: Enter the RADIUS server’s IP and port, and the shared secret you defined on the RADIUS server. Then you can connect clients by entering the login credentials you defined on the RADIUS server.
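As a rough sketch of what that setup looks like in the open-source FreeRADIUS, the AP's shared secret goes in clients.conf and test accounts go in the users file. The names, address and secrets here are made-up placeholders:

```
# clients.conf -- one entry per AP or wireless router
client ap-office {
        ipaddr = 192.168.1.2
        secret = ChangeMeSharedSecret
}

# users -- a test account (cleartext passwords are for lab use only)
alice   Cleartext-Password := "SuperSecretPass"
```

Other RADIUS servers keep the same two pieces of information (client secrets and user credentials), just in different configuration interfaces.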

Here are a few previous articles you may want to check out: “6 Secrets to a Successful 802.1X Rollout,” “Tips for Troubleshooting 802.1X Connections” and “Lock Down Your Wi-Fi Network: 8 Tips for Small Businesses.”
Project 9: Install Windows Server and set up a domain

As a network admin you’ll likely manage Microsoft-based networks running Windows Server. To gain more experience, consider running Windows Server at home.

Although purchasing a server edition just for tinkering isn’t feasible, there are some free options. Microsoft provides 180-day free trials: a downloadable ISO for installing on a physical machine, a virtual hard drive (VHD) for running on a virtual machine, and access to a pre-configured virtual machine in the Windows Azure cloud. Plus the company offers Virtual Labs — guided tutorials in a virtual environment — that you might want to check out.

Once you get access to a server, discover and experiment. Perhaps configure Active Directory and play with Group Policies, set up Exchange and configure an Outlook client, or set up NPS for 802.1X authentication.
Next steps

If you found these projects useful in learning and getting experience, keep in mind there are many more self-taught labs out there online. Try searching for labs on the specific certifications you’re interested in or the network vendors you’d like to administer.

Though virtual environments and emulators provide a quick and easy way to get hands-on experience, also try to get as much time as you can with the real gear. Ask the IT department if you can borrow any spare equipment, and take advantage of any other chances you spot to get real-world experience.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at



The devices connected to your router battle for bandwidth like thirst-crazed beasts jostling for access to a receding watering hole. You can’t see the melee, but you can feel its impact. Without intervention, the strongest competitors—a BitTorrent download, for instance—will drink their fill, even if it’s not essential to their survival, while others—a VoIP call, a Netflix stream, or a YouTube video—are left to wither and die.

A router with good Quality of Service (QoS) technology can prevent such unequal distribution of a precious resource. You can dip only one straw into the Internet at a time, after all. QoS ensures that each client gets its chance for a sip, and it also takes each client’s specific needs into account. BitTorrent? Cool your jets. If one of your packets is dropped, it’ll be resent. You can run in the background. Netflix, VoIP, YouTube? Lag results in a bad user experience. Your data gets priority.

That’s a gross oversimplification, of course. Here’s a more in-depth explanation. QoS, also known as traffic shaping, assigns priority to each device and service operating on your network and controls the amount of bandwidth each is allowed to consume based on its mission. A file transfer, such as the aforementioned BitTorrent, is a fault-tolerant process. The client and the server exchange data to verify that all the bits are delivered. If any are lost in transit, they’ll be resent until the entire package has been delivered.

That can’t happen with a video or audio stream, a VoIP call, or an online gaming session. The client can’t ask the server to resend lost bits, because any interruption in the stream results in a glitch (or lag, in terms of game play). QoS recognizes the various types of traffic moving over your network and prioritizes it accordingly. File transfers will take longer while you’re watching a video or playing a game, but you’ll be assured of a good user experience.
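The "controls the amount of bandwidth" part is classically implemented with a token-bucket shaper: tokens accumulate at the permitted rate, and a packet may pass only if the bucket holds enough tokens to cover its size. A toy sketch of the idea follows; it illustrates the algorithm, not any particular router's implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: allow `rate` bytes/sec, bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill tokens for the time elapsed, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes   # spend tokens, let the packet through
            return True
        return False                # not enough tokens: delay or drop

bucket = TokenBucket(rate=1000, capacity=1500)
print(bucket.allow(1500))  # True: the burst fits
print(bucket.allow(1500))  # False: the bucket is drained
```

A real QoS engine runs one bucket per traffic class, giving the latency-sensitive classes the larger rates.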

Traditional QoS

Different routers take different approaches to QoS. With some models, you simply identify the type of traffic you want to manage and then assign it a priority: High, medium, or low. With others, you can choose specific applications, or even identify the specific ports a service or application uses to reach the Internet. Yet another way is to assign priority to a specific device using its IP or MAC address.
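Routers aren't the only place priority gets set: an application can mark its own packets with a DSCP code point that QoS-aware gear along the path may honor. A minimal sketch on Linux, assuming DSCP 46 ("Expedited Forwarding"), the conventional marking for VoIP:

```python
import socket

EF_DSCP = 46        # Expedited Forwarding, the usual VoIP marking
tos = EF_DSCP << 2  # DSCP occupies the top six bits of the ToS byte

# Mark a UDP socket so packets it sends carry the DSCP value.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Whether the marking is honored is up to each hop; many consumer routers ignore it, which is exactly why router-side QoS rules exist.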


Many older routers, such as this Netgear WNR2000 802.11n model, have predefined Quality of Service for a limited number of applications, but you must configure your own rules for anything the manufacturer didn’t think of.

Configuring QoS this way can be very cumbersome, requiring lots of knowledge of protocols, specific details about how your router operates, and networking in general. Some routers, for instance, depend on you to inform them of the maximum upload and download speeds your ISP supports. Enter the incorrect values, and your network might perform worse instead of better.

Fortunately, router manufacturers have made great strides in making QoS easier to configure. In some cases, it’s become entirely automatic.

Intelligent QoS

Some routers include the option of automated QoS handling. Most newer models support the Wi-Fi Multimedia (WMM) standard, for instance. WMM prioritizes network traffic in four categories, from highest to lowest: Voice, video, best effort (most traffic from apps other than voice and video), and background (print jobs, file downloads, and other traffic not sensitive to latency). WMM is good as far as it goes, but it ameliorates only wireless network contention. It does nothing to resolve the battle for bandwidth among wired network clients.

Better routers go further to cover both sides of the network. They automatically choose which traffic gets priority based upon assumptions—putting video and voice ahead of file downloads, for instance. The intelligence behind each vendor’s QoS functionality, however, varies according to the quality of the algorithm in use and the processor power available to run it.


Qualcomm’s StreamBoost technology enables the D-Link DGL-5500 to display exactly what’s consuming the majority of your network’s bandwidth.

Right now, Qualcomm’s StreamBoost traffic-shaping technology seems to be the hot QoS ticket. StreamBoost, first announced in January, 2013, is based on technology originally developed by Bigfoot Networks. Bigfoot, a company that Qualcomm acquired in 2011, designed network-interface cards targeted at gamers, who are among the most latency-sensitive computer users in the world.

Qualcomm doesn’t manufacture routers, but the company does design and manufacture processors that go into high-end consumer routers such as Netgear’s Nighthawk X4 and D-Link’s DGL-5500 Gaming Router. While there’s no technological barrier to running StreamBoost on a Marvell or Broadcom processor, Qualcomm currently doesn’t license the firmware separately from its chips.

StreamBoost can distinguish between and prioritize latency-sensitive traffic (audio, video, gaming, and so on) over latency-insensitive traffic (downloads, file transfers, etc.), and it can adjust its allocation of bandwidth to various network activities to ensure all clients get a good experience. If several clients are streaming Netflix videos at the same time, for instance, it can automatically reduce one or more of those streams from 1080p quality to 720p quality to ensure all the sessions have enough bandwidth.

What’s more, StreamBoost can distinguish among the types of client devices and reduce the image quality streaming to a smartphone or tablet, because the degradation won’t be as noticeable on those small screens as it would be on a big-screen smart TV.


StreamBoost lets you assign priorities to client PCs, so you can preserve bandwidth for a smart TV at the expense of a PC used for BitTorrent downloads, for instance.

StreamBoost’s bandwidth graphs and tools provide better visibility and more precise tuning than other QoS tools I’ve seen. And if you opt in to participate, you’ll receive ongoing updates from Qualcomm’s database in the cloud so that your router can continually optimize its performance and learn how to handle new devices that come on the market. StreamBoost support alone won’t make a crappy router great, but it can make a difference.

Don’t stop with QoS

Good Quality of Service is essential if you use your network to stream video, play online games, make VoIP and Skype calls, or watch YouTube (and if you don’t do any of those things, you wouldn’t have clicked on this story in the first place). The performance benefits you’ll realize might even save you from moving up to a pricier service tier with your ISP.

Linksys WRT1900AC Wi-Fi router
An 802.11ac router can deliver higher performance even with clients that are equipped with 802.11n adapters.

But there are other things you can do beyond traffic shaping. Perform a site survey using a tool such as Kismet to see which radio channels your neighbors are relying on, and configure your router to use something else. There are only three non-overlapping channels in the 2.4GHz frequency band: 1, 6, and 11. Use one of these if possible.
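The claim about channels 1, 6 and 11 is easy to verify from the 2.4GHz channel plan: channel centers sit 5MHz apart starting at 2412MHz, while an 802.11b/g transmission is roughly 22MHz wide, so two channels interfere when their centers are closer than that. A quick sketch:

```python
def center_mhz(channel):
    """Center frequency of a 2.4GHz Wi-Fi channel (valid for channels 1-13)."""
    return 2407 + 5 * channel

def overlaps(a, b, width_mhz=22):
    """Two channels interfere if their centers are closer than one channel width."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

# 1, 6 and 11 are mutually clear; anything packed closer together collides.
print(overlaps(1, 6), overlaps(6, 11), overlaps(1, 11))  # False False False
print(overlaps(1, 4))                                    # True
```

This is why a stumbler showing neighbors on channels 3 or 8 means interference for everyone; moving to 1, 6 or 11 is almost always the right call.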

If you have a dual-band router that supports both the 2.4GHz and 5GHz frequency bands, use the less-crowded higher frequency for latency-sensitive traffic such as media streaming, and reserve 2.4GHz for things like downloads. There are many more non-overlapping channels at 5GHz, and the higher channels (149 and up) permit higher transmit power than the lower channels.

Lastly, if you’re using an 802.11n (or older) router, consider moving up to a model based on the newer 802.11ac standard. Even if your clients are stuck with 802.11n adapters, you’ll still see a significant performance boost with an 802.11ac router.




Google is on track to spend more money this year attempting to influence lawmakers than any other tech company

Google and Facebook continued to pour millions of dollars into political lobbying in the third quarter in attempts to influence U.S. lawmakers and have legislation written in their favor.

Google spent $3.94 million between July and September while Facebook spent $2.45 million, according to disclosure data published Tuesday.

The only tech-related company to outspend Google was Comcast, which is trying to persuade politicians to look favorably on a merger with Time Warner and spent $4.23 million during the quarter.

But Google stands as the largest spender in the entire tech industry to date this year. It has run up a $13 million bill lobbying Washington politicians and their offices on a range of issues as diverse as online regulation of advertising, cybersecurity, patent abuse, health IT, international tax reform, wind power and drones.

If industry spending continues at its current level, 2014 will mark the fourth year that Google has spent more money on federal lobbying than any other technology company.

Facebook began lobbying Washington in 2009 and has quickly risen to become the fourth-largest spender in the tech industry so far this year, behind Google, Comcast and AT&T.

The company’s lobbying hits an equally diverse range of areas including cyber breaches, online privacy, free trade agreements, immigration reform, Department of Defense spending and intellectual property issues.

Another notable spender in the third quarter was Amazon, which plowed $1.18 million into its lobbying efforts. That represents a quarterly record for the Seattle company and is the second quarter in a row that it has spent more than $1 million on lobbying.

Amazon’s lobbying was aimed at many of the same areas targeted by Google and Facebook, but covered additional subjects close to its business, including postal reform, online wine sales, mobile payments and Internet tax payments.

The money is funneled to D.C. lobbying firms that use it to push their clients’ agendas to politicians and their staffers. The lobbying disclosure reports are published quarterly by the U.S. Senate and detail spending in general areas, but do not go into specifics.

Lobbying has long been an effective tool used by major companies, but it’s only been in the last few years that Internet companies have started spending money in amounts to rival traditional tech giants.

During the third quarter, other major spenders included Verizon ($2.91 million), CTIA ($1.95 million), Microsoft ($1.66 million) and Oracle ($1.2 million).

Apple spent just over $1 million in the quarter lobbying on issues including consumer health legislation, transportation of lithium ion batteries, international taxes, e-books, medical devices and copyright.



10 Tips to Ensure Your IT Career Longevity

Written by admin
October 19th, 2014

Enjoying a long career doesn’t happen by accident. It takes planning and effort. Use these tips to get your head in the game and keep your eye on the future.

Many people say that IT and technology are a young man’s game, and if you look at most influential tech companies you might agree. Most IT workers employed at those companies are under 35 and male. However, these big-name firms employ only a fraction of tech professionals, and there are plenty of opportunities out there for everyone. IT has one of the lowest unemployment rates of any industry because in most organizations technology touches every part of the business.

Be Responsible for Your Own Career
Achieving career longevity in the IT business takes effort, time and resources, and nobody but you can organize, facilitate and be responsible for all of it. To stay ahead of the learning curve you need to think about your goals and architect your future.

Many organizations are getting better at providing embedded employee performance and career management processes, according to Karen Blackie, CIO of Enterprise Systems & Data for GE Capital. However, she warns that you are your own best advocate and should always strive to “own” your career. Don’t wait for your organization to do it for you because that day may never come.

This means stepping back and thinking about where you want to be in X amount of time and then outlining the different skills and experience needed to get there. With that information you can start mapping out your career. “Doing research into what interests you, setting goals and objectives and then having a plan around how you will accomplish those goals is very important,” says Blackie. Remember positions get eliminated and things don’t always work out so it’s wise to consider alternate paths.

Flexibility and Agility Required
Technology moves at an unprecedented pace, which means you’ve got to be flexible. “Adaptability is key. CIOs who can’t adapt to that change will see themselves – unfortunately – left behind in a competitive job market. But the CIOs who see each new change – whether mobile, BYOD, cloud or IoT – as an opportunity are the technology executives who will continue to be in demand, because they’ve proven that they can leverage new solutions to drive business value,” says J.M. Auron, IT executive resume writer and president of Quantum Tech Resumes.
Learn About the Business

“Having the business knowledge is a key foundational element to one’s career,” says GE Capital’s Blackie. Being a great developer isn’t enough if you plan to climb the corporate ladder. You’ve got to understand your industry and how your company does business. That knowledge can also make you a better programmer: understanding the business needs will help you deliver products, software and services that better align with the business.

Always Be Learning
The price of career longevity in the world of IT and technology is constant learning. If you aren’t passionate about it or you’re complacent, it’s easy to find yourself locked into outdated technology and left behind. There are many ways to stay current, such as a formal college environment or a certification course. “It is your career and it is up to you to keep educating yourself,” says Robert P. Hewes, Ph.D., senior partner with Camden Consulting Group, with oversight for leadership development and management training.

Professional organizations, conferences, developer boot camps and meet-ups are all great ways to stay abreast of the newest technologies and build network connections within your industry. “It’s often a place where you develop life-long friends and colleagues,” says Blackie.

Attend Industry Conferences

Industry conferences are a great way to learn about the newest trends in technology as well as network with like-minded people who hold similar interests. Be selective about which conferences you attend and make sure you allot the necessary time to socialize and network with your peers.

“One mistake attendees often make at conferences is filling their schedule so tightly with panels that they miss out on the networking available during downtime. It’s important to attend mixers and informal gatherings at conferences to meet your peers and build relationships that could last throughout your career,” says Blackie.

Incorporate Time into Your Day for Reading
Set aside a little time each day to stay current with the goings-on in your part of technology and beyond. “Become a regular reader of info in your industry, be it an industry journal or an online blog/magazine. There is a lot of information out there. Another quick way to find relevant information is via an aggregator; Pocket and LinkedIn do this,” says Hewes.

Google News and a host of other news aggregators like LinkedIn Pulse or Reddit offer a daily stream of news, with alerts and notifications that allow users to focus on key areas of interest.

Pay Attention to Competitors
“It’s important to get to know industry competitors and watch what they’re doing. You can learn a lot from the successes and failures of your competitors,” says Blackie. Being first isn’t always required to be successful. Doing it better than the next guy is, however. Find your competitors as well as organizations that you think are thought leaders in your industry and follow them in social media or create a Google Alert for them.

Find a Mentor or Coach

Mentoring is useful at all levels of one’s career. A mentor can help you negotiate internal politics or provide insight into how to solve lingering problems. You may also have different mentors throughout your career, each offering a different perspective or expertise.

Understand the Value of Social Media
Not everyone adores social media, but it’s a necessary element in the race to separate you from the rest of IT professionals. Build and maintain profiles on relevant social media sites and then use them to explain the value proposition you offer.

Work on Soft Skills and Some Not-so-Soft Ones

Branding Skills

Branding is what helps separate you from the rest of the pack and explains what your value proposition is to your employer or prospective employers. “Branding is another key for career advancement – and one that few technology leaders have fully embraced. Giving thought to that brand is key for career longevity and advancement,” Auron says.

Communication Skills

According to Auron, the ability to find the right path, communicate value and build enthusiasm is a crucial step in transforming the perception of IT from that of a cost center to that of a business enabler. “The most critical skill is the ability to communicate the real value of technology investment to nontechnical leadership. Some technologists can fall into one of two traps: giving so much detail that the audience’s eyes glaze over, or appearing patronizing when intelligent but nontechnical leaders don’t get a specific reference,” Auron says.

Project Management Skills

At some point in your technology career you will be asked to lead a project. When the time comes make sure you’ve got the necessary tools. “It is critical if you are headed onto the management track. In fact, you should try to gain wide experience with all kinds of projects,” says Hewes.



8 headline-making POS data breaches

Written by admin
October 14th, 2014

The rash of data breaches in the US through POS terminals has many looking at the chip-and-PIN model used in Europe.

POS swipes
Next month marks an unceremonious anniversary of the 1960s-vintage “swipe-and-signature” magnetic stripe card system. To some, the end of this point-of-sale technology can’t come a moment too soon. While talks continue on implementing the “chip and PIN” system, we look back at the most recent incidents involving stealing information from these devices.

Home Depot
Data thieves were the ones following The Home Depot’s credo of “More doing,” as they did more doing with other people’s money. The Home Depot reported a data breach in September. The home improvement retailer confirmed that 2,200 stores in the U.S. and Canada were compromised. The number of credit cards affected may have reached 56 million.

Target
Around 70 million customers found coal in their stockings when Target was targeted by thieves just before this past Christmas.

Michaels
The arts and crafts chain reported more than 3 million credit cards affected by identity theft. This was a big one-two punch, with Target’s breach occurring just a month earlier.

Beef O’Brady
Customers had a beef with Beef O’Brady in early September, when the Florida restaurant chain’s point-of-sale system was the victim of a data breach. There have been no reports yet on how many cards were affected by the breach.

Dairy Queen
Dairy Queen was hit with a blizzard of credit card numbers stolen at the end of August. Dairy Queen reported a data breach of its POS (point-of-sale) system after malware that authorities are calling “Backoff” was found on the system. Currently the restaurant chain is unclear as to how many stores were affected.

P.F. Chang’s
CSO’s Dave Lewis reported: On June 10, 2014, the staff at P.F. Chang’s received a visit that they didn’t want to have come knocking on their door. The US Secret Service came calling to alert the restaurant chain that it had been compromised by a criminal gang stealing data from its point-of-sale systems.

Shaw’s and Star Market
Shaw’s and Star Market operate more than 50 grocery stores in Massachusetts, but there has been no report on how many credit cards were stolen in the chains’ breach.

TJ Maxx
At over 45 million cards, the 2007 data breach at TJ Maxx is the granddaddy of them all. According to Computerworld: “In filings with the U.S. Securities and Exchange Commission yesterday, the company said 45.6 million credit and debit card numbers were stolen from one of its systems over a period of more than 18 months by an unknown number of intruders.”




Rise of smart machines, ubiquitous access and software-defined architectures will reshape IT, Gartner says

ORLANDO—Gartner defines its Strategic Technology Trends as those technologies that have the most potential to drive great change in the enterprise IT arena in the next three years.

Indeed this year’s crop has that potential as trends like software-defined networks and 3D printing take center stage in Gartner’s list.

“You need to be looking at linking to customers in new and unique ways and at what technologies set the foundation to enable these moves,” said Gartner vice president David Cearly. IT will be dealing with everything from virtual technologies to intelligent machines and analytics data everywhere, he said. “And in the end all things run through a completely secure environment.”

So Gartner’s Top 10 Strategic Technology Trends for 2015 list looks like this:
1. Computing everywhere: Cearly says the trend is not just about applications but rather wearable systems, intelligent screens on walls and the like. Microsoft, Google and Apple will fight over multiple aspects of this technology. You will see more and more sensors that will generate even more data, and IT will have to know how to exploit it—think new ways to track users and their interactions with your company—in an effective, positive way.

2. The Internet of things: Yes this one is getting old it seems, but there’s more to it than the hype. Here IT will have to manage all of these devices and develop effective business models to take advantage of them. Cearly said IT needs to get new projects going and to embrace the “maker culture” so people in their organizations can come up with new solutions to problems.

3. 3D Printing: Another item that has been on the Gartner list for a couple of years. But things are changing rapidly in this environment. Cearly says 3D printing has hit a tipping point in terms of the materials that can be used and the price points of machines. It enables cost reduction in many cases. IT needs to look at 3D printing and think about how it can make your company more agile. Can 3D printing drive innovation?

4. Advanced, Pervasive and Invisible Analytics: Security analytics are the heart of next generation security models. Cearly said IT needs to look at building data reservoirs that can tie together multiple repositories which can let IT see all manner of new information – such as data usage patterns and what he called “meaningful anomalies” it can act on quickly.
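The “meaningful anomalies” Cearly mentions can be surfaced with even very simple statistics. As a hedged illustration (the z-score threshold and the sample data below are invented for this example, not Gartner’s method), here is a minimal sketch in Python:

```python
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations
    from the mean (a basic z-score outlier test)."""
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hypothetical daily data-usage readings with one obvious spike.
usage = [102, 98, 110, 95, 105, 99, 1000, 101, 97, 103]
print(find_anomalies(usage))  # the 1000-unit spike is flagged
```

Real security analytics platforms use far richer models, but the principle of flagging outliers against a baseline is the same.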

5. Context-Rich Systems: This one has been a Gartner favorite for a long time – and with good reason. The use of systems that utilize “situational and environmental information about people, places and things” in order to provide a service, is definitely on the rise. IT needs to look at creating ever more intelligent user interfaces linking lots of different apps and data.

6. Smart Machines: This one is happening rapidly. Cearly pointed to IBM’s Watson, which is “learning” to fight cancer, and a mining company – Rio Tinto – which is using automated trucks in its mines. Virtual sages, digital assistants and other special service software agents will abound in this world, he said.

7. Cloud/Client Computing: This trend was on last year’s list as well but Gartner says the need to develop native apps in the cloud versus migrating existing apps is the current issue.

8. Software-Defined Applications and Infrastructure: In order to get the agility new environments demand, we cannot have hard-coded, predefined networks, Cearly said. IT needs to be able to construct dynamic relationships. Software-defined technologies help on that scale.

9. Web-Scale IT: This trend remains pretty much the same as last year. Gartner says Web-scale IT is a pattern of global-class computing technologies that deliver the capabilities of large cloud service providers. The likes of Amazon, Google and others are re-inventing the way IT services can be delivered. It still requires a cultural IT shift to be successful.

10. Risk-Based Security and Self-protection: Cearly said all roads to success in the digital future lead through security. Trends here include building applications that are self-protecting.



From electronic pills to digital tattoos, these eight innovations aim to secure systems and identities without us having to remember a password ever again

8 cutting-edge technologies aimed at eliminating passwords
In the beginning was the password, and we lived with it as best we could. Now, the rise of cyber crime and the proliferation of systems and services requiring authentication have us coming up with yet another not-so-easy-to-remember phrase on a near daily basis. And is any of it making those systems and services truly secure?

One day, passwords will be a thing of the past, and a slew of technologies are being posited as possibilities for a post-password world. Some are upon us, some are on the threshold of usefulness, and some are likely little more than a wild idea, but within each of them is some hint of how we’ve barely scratched the surface of what’s possible with security and identity technology.

The smartphone
The idea: Use your smartphone to log into websites and supply credentials via NFC or SMS.

Examples: Google’s NFC-based tap-to-unlock concept employs this. Instead of typing passwords, PCs authenticate against the user’s phone via NFC.

The good: It should be as easy as it sounds. No interaction from the user is needed, except any PIN they might use to secure the phone itself.

The bad: Getting websites to play along is the hard part, since password-based logins have to be scrapped entirely for the system to be as secure as it can be. Existing credentialing systems (e.g., Facebook or Google login) could be used as a bridge: Log in with one of those services on your phone, then use the service itself to log into the site.

The smartphone, continued
The idea: Use your smartphone, in conjunction with third-party software, to log into websites or even your PC.

Examples: Ping Identity. When a user wants to log in somewhere, a one-time token is sent to their smartphone; all they need to do is tap or swipe the token to authenticate.

The good: Insanely simple in practice, and it can be combined with other smartphone-centric methods (a PIN, for instance) for added security.

The bad: Having enterprises adopt such schemes may be tough if they’re offered only as third-party products. Apple could offer such a service on iPhones if it cared enough about enterprise use; Microsoft might if its smartphone offerings had any traction. Any other takers?
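As a rough sketch of how such a one-time-token scheme might work (the token format, lifetime and in-memory storage here are assumptions for illustration, not Ping Identity’s actual protocol):

```python
import secrets
import time

class TokenService:
    """Issue single-use login tokens that would be pushed to a user's phone."""
    def __init__(self, ttl=60):
        self.ttl = ttl            # seconds a token stays valid
        self._pending = {}        # token -> (user, issued_at)

    def issue(self, user):
        token = secrets.token_urlsafe(16)
        self._pending[token] = (user, time.time())
        return token              # in reality, sent to the phone, not returned

    def redeem(self, token):
        """A tap on the phone redeems the token exactly once."""
        entry = self._pending.pop(token, None)
        if entry is None:
            return None           # unknown or already used
        user, issued = entry
        if time.time() - issued > self.ttl:
            return None           # expired
        return user

svc = TokenService()
t = svc.issue("alice")
print(svc.redeem(t))   # "alice" on first use
print(svc.redeem(t))   # None: the token cannot be replayed
```

The single-use property comes from popping the token on redemption, which is what makes an intercepted token worthless after the legitimate tap.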

Biometrics
The idea: Use a fingerprint or an iris scan — or even a scan of the vein patterns in your hand — to authenticate.

Examples: They’re all but legion. Fingerprint readers are ubiquitous on business-class notebooks, and while iris scanners are less common, they’re enjoying broader deployment than they used to.

The good: Fingerprint recognition technology is widely available, cheap, well-understood, and easy for nontechnical users.

The bad: Despite all its advantages, fingerprint reading hasn’t done much to displace the use of passwords in places apart from where it’s mandated. Iris scanners aren’t foolproof, either. And privacy worries abound, something not likely to be abated once fingerprint readers become ubiquitous on phones.

The biometric smartphone
The idea: Use your smartphone, in conjunction with built-in biometric sensors, to perform authentication.

Examples: The Samsung Galaxy S5 and HTC One Max (pictured) both sport fingerprint sensors, as do models of the iPhone from the 5S onwards.

The good: Multiple boons in one: smartphones and fingerprint readers are both ubiquitous and easy to leverage, and they require no end user training to be useful, save for registering one’s fingerprint.

The bad: It’s not as hard as it might seem to hack a fingerprint scanner (although it isn’t trivial). Worst of all, once a fingerprint is stolen, it’s, um, pretty hard to change it.

The digital tattoo
The idea: A flexible electronic device worn directly on the skin, like a fake tattoo, and used to perform authentication via NFC.

Examples: Motorola has released such a thing for the Moto X (pictured), at a cost of $10 for a pack of 10 tattoo stickers, with each sticker lasting around five days.

The good: In theory, it sounds great. Nothing to type, nothing to touch, (almost) nothing to carry around. The person is the password.

The bad: So far it’s a relatively costly technology ($1 a week), and it’s a toss-up as to whether people will trade typing passwords for slapping a wafer of plastic somewhere on their bodies. I don’t know about you, but even a Band-Aid starts bothering me after a few hours.

The password pill
The idea: This authentication technology involves ingesting something into your body — an electronic “pill” that can send a signal of a few bits through the skin.

Examples: Motorola demonstrated such a pill last year, one produced by Proteus Digital Health that is normally used for gathering biometrics in patient care (pictured).

The good: A digital pill makes the authentication process completely passive, save for any additional manual authentication (e.g., a PIN) that might be used.

The bad: Who is comfortable (yet) with gulping down a piece of digital technology? Like the digital tattoo, this doesn’t sound like something one would want to use regularly, but rather more as a day pass or temporary form of ID.

Voice printing
The idea: Use voice recognition to authenticate, by speaking aloud a passphrase or a text generated by the system with which you’re trying to authenticate.

Examples: Porticus, a startup profiled back in 2007, has an implementation of this technology (“VoiceKeyID”), available for multiple mobile and embedded platforms.

The good: The phrase used to identify you isn’t the important part; it’s the voice itself, and it can be easily changed. Speaking is often faster than typing or performing some other recognition, and it’s a solution that works even in a hands-free environment. Plus, microphones are now standard-issue hardware.

The bad: As with any technology that exists in a proprietary, third-party implementation, the hard part is getting people to pick up on it.

Brainwave authentication
The idea: Think your password and you’re logged in. That’s right: an authentication system that uses nothing but brainwaves.

Examples: A prototype version of the system, using a Bluetooth headset that contained an EEG sensor, has been demonstrated by folks at the University of California Berkeley School of Information. The “pass-thoughts” they used consisted of thinking about some easily memorized behavior, e.g., moving a finger up and down.

The good: Consumer-grade EEG hardware is cheap, and the tests conducted by the School of Information showed it was possible to detect a thought-out password with a high degree of accuracy.

The bad: Donning a headset to log in seems cumbersome — that is, assuming you’re not spooked by the idea of a computer reading your thoughts.



9 Rules for the Developer-ization of IT

Written by admin
September 15th, 2014

CIOs who want to drive innovation and change in their organizations should focus on making the lives of developers easier so they can innovate, produce great apps and deliver valuable IP.

The acceptance of SaaS, the cloud and other easily accessible technologies allows the lines of business to drive innovation without necessarily turning to IT for help.

While the CIO can take back some of that ground by becoming a broker and orchestrator of services, the real key to driving innovation and change today is app development, according to Jim Franklin, CEO of SendGrid, a leading provider of email infrastructure as a service.

Franklin says that much like the consumerization of IT that has been underway for several years, CIOs now need to embrace the “developer-ization of IT,” which is about allowing developers to focus on innovating, producing great apps and delivering valuable IP.

Rule 1: Embrace the Public Cloud
The public cloud offers developers access to scalable, flexible infrastructure. With it, they can scale up as necessary while consuming only what they need. The efficiencies created by the scale at which public clouds operate are something you just can’t replicate on your own.

“There’s no need to reinvent the wheel by building servers, storage and services on your own,” Franklin says. “This will shave precious time off of project schedules, reduce time to market and lower costs significantly.”

Rule 2: Adopt Enterprise Developer Marketplaces
Access to marketplaces full of enterprise-ready tools and APIs will allow your developers to build better applications faster.

“Embrace the new breed of enterprise developer marketplaces,” Franklin says. “Give developers access to more tools that are enterprise-ready. An emerging set of marketplaces from Windows Azure, Heroku and Red Hat provide a variety of new tools and services to ramp up application development productivity.”

Rule 3: Forget Long-Term Contracts for Tools and Services
A long-term contract for a service or tool may make financial sense, but can be a barrier to developer efficiency and agility. Instead, make it as easy as possible for developers to self-select the best tool for the job at hand.

“The nature of application development can be very transitory at times,” Franklin says. “Developers may need one service or tool one day and then pivot to something else the next, and they like to try and test tools before they make a financial commitment. Make the process of using different tools and vendors frictionless for them so they can self-select the tools they want. Long-term contracts impede this, since approvals are needed from procurement or legal, and this can draw out the process.”

Rule 4: Recognize that Developers Speak Their Own Language
When trying to communicate with developers — whether you’re trying to attract talent, project manage or target them for sales — tailor your messages for a highly technical audience and use the communication channels they’re comfortable with. This could mean user forums, hackathons or social media.

“The key is trying to be very flexible and open,” Franklin says. “Hackathons have really become a strong trend because of the power of letting people show their creativity in a lot of different ways.”

Rule 5: Give Developers Freedom With Controls
Creative solutions require the ability to experiment freely. Embrace that, but also put some controls in place for your own peace of mind. Franklin suggests deploying API management solutions and monitoring tools so IT can have a window into the traffic flowing through the network and can ensure security measures are taken into account.
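One concrete example of “freedom with controls” is rate limiting at the API layer. The sketch below is a plain token-bucket limiter in Python; the capacity and refill numbers are made up for illustration and are not a recommendation from Franklin:

```python
class TokenBucket:
    """Allow bursts up to `capacity` calls, refilled at `rate` tokens/second."""
    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, then spend one token if available.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical limit: a 2-call burst, 1 call/second sustained.
bucket = TokenBucket(capacity=2, rate=1.0)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])
```

The third call at t=0.2s is rejected because the burst is spent, while the call at t=1.5s succeeds once the bucket has refilled; an API gateway applies the same logic per client key.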

Rule 6: Don’t Get Locked Down by a Platform or Language
Encourage your developers to build apps that are platform agnostic across Web, mobile and for Internet of Things devices from the start. Designing apps to be platform agnostic in the first place can save developers a lot of grief in the long-term.

“Rather than building for the Web and then adding a mobile extension later, developers should keep this in mind at the start of the app development process,” Franklin says. “If an application has a physical aspect to it, developers should be encouraged to define, deploy, communicate and manage the IoT application in a scalable fashion from the start.”

Rule 7: Give Developers Space to Pursue Their Own Projects
Developers are creative people with a natural love for making things. They’ll be happiest if you provide them with a collaborative, creative outlet for their own projects, and they may just solve an intractable problem in the process.

“While the Google 20 percent idea — where employees used to take one day a week to work on side projects — may not work for everyone, understand that developers have an inherent desire to share new tools, hacks, shortcuts and passion projects with their peers,” Franklin says. “Give them time to do this at work. Some of these ideas may end up in your products. Gmail, AdSense and Google Hangouts, for example, all started as side projects of Google employees.”

Rule 8: Set Standards for Coding in RESTful, Modern APIs
Issue a set of best practices related to usable standards like REST. This, Franklin says, will allow developers to more rapidly build applications that access and act upon data exposed via APIs, even in environments with unreliable network speeds and limited computing power.

“REST also makes it easy for humans to understand what’s being exchanged while allowing computers to talk to one another efficiently,” Franklin adds.
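As a hedged illustration of the kind of conventions such a best-practices document might codify (the resource names and routes below are invented for this example):

```python
import json

# A RESTful routing standard pairs HTTP verbs with resources rather than
# with actions. This toy dispatcher shows the convention for one resource.
ORDERS = {1: {"id": 1, "status": "shipped"}}

def handle(method, path):
    parts = path.strip("/").split("/")
    if method == "GET" and parts == ["orders"]:
        return 200, json.dumps(sorted(ORDERS))          # the collection
    if method == "GET" and len(parts) == 2 and parts[0] == "orders":
        order = ORDERS.get(int(parts[1]))               # one member
        return (200, json.dumps(order)) if order else (404, "")
    return 405, ""  # verb not supported on this resource

print(handle("GET", "/orders/1"))
```

Because every exchange is a verb plus a noun plus JSON, both humans and machines can read the traffic, which is the property Franklin highlights.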





Guys, stop creeping out women at tech events

Written by admin
September 10th, 2014

Not realizing that your behaviors constitute harassment is no excuse

I go to a lot of security conferences, but I never gave much thought to this curious fact: The conferences are hardly ever headlined by women. In fact, not a lot of women attend security conferences and other tech events.

I guess, if I noticed this at all, I chalked it up to the general dearth of women in the technology field. But the scarcity of women at events goes beyond that, and my eyes have only recently been opened to this fact and the reality that explains it.

My awakening began through my role as president of the Information Systems Security Association. I’ve been leading the creation of special interest groups (SIG) with the goal of making ISSA a more virtual organization that could bring together people with common technical interests from around the world. I was somewhat surprised that common technical interests were not driving the most energetic SIG, Women in Security.

I was curious. Why were women so interested in banding together? As I started asking questions, it began to make sense. And I found in all this a message for men.

IT guys, women are uncomfortable around us. Enough of us are acting like creeps around them that they would rather not join us in large groups. Even in a virtual setting like SIGs, they would rather get together with other women.

My questioning revealed to me that women are harassed on a regular basis at professional events. The harassment is often minor, but there have been cases of physical assault and even rape. Part of the problem is that to too many men in IT, a lot of the minor harassment incidents at tech events — men putting their arms around women’s shoulders, men hitting on women, men telling women graphic details about their sexual exploits — sound like no big deal.

Men who feel that way are not adept at empathy. They do not see that sexual overtures made to a woman are threatening in a way that sexual overtures made to a man rarely are. They do not imagine that a woman might need to maintain a personal physical space in order to feel safe. They might not even understand that hitting a woman on the buttocks is physical assault.

What everyone needs to understand is that even minor harassment is a significant reason for women to avoid tech events, networking opportunities and conferences. Maybe you would be flattered to be propositioned by a woman at an event. That doesn’t mean that a woman is going to feel the same way. Most women don’t like it. Especially in a professional setting. And even more so when she is one of very few women in a room full of men who are leering at her as if she were a zebra walking through a pride of lions.

Guys, you know what it means to be professional, don’t you? And you know that a professional conference is not the same as a pickup bar or a frat mixer, right? All right, then, if you tell a woman that she gave a great presentation, don’t follow it up by hitting on her. That pushes professionalism right out the window. So does invading a circle of people who are networking at an event and putting your arm around the lone woman.

These are all things that have actually happened, and in every case, the women involved told me, they did nothing to encourage the harassing behavior. So what gave those guys the idea that they had license to do these things? Ignorance, and the fact that many men within the tech industry are socially awkward. This is not offered as an excuse, but a lot of techies just don’t know when they are making a woman uncomfortable. I’m sure that I have unintentionally offended someone at some point in my career. But techies are also very smart, and knowing that this problem exists, we are capable of changing our behaviors so that women don’t keep paying the price of our social awkwardness.

That said, though, some men in the tech industry intentionally practice sexually aggressive behaviors. That is completely unacceptable, and it is why I support the Ada Initiative in its efforts to set up a code of conduct for events. (I’m not comfortable with reports that the Ada Initiative may have been involved in censorship at a B-Sides security conference, but that is another matter.) Event organizers and other men in general need to take a stance to filter out these people proactively. Likewise, the women have to speak up and let the event organizers know who the serial harassers are.

One final observation: If harassment itself isn’t disturbing enough, many women blame themselves when they are its victims. They feel that they should have been strong enough to confront the harasser and tell him to stop, but failed to do so because they found themselves surrounded by strangers and caught off guard. With all of those eyes on them, they didn’t want to seem like troublemakers. And they have seen, time and time again, women who complain about harassment being called uptight bitches who are just imagining things or making them up. So they just swallow their pride and keep quiet. But later, they blame themselves for not standing up to the perpetrator.

After talking to many women in the tech profession in recent weeks, I have learned that a lot of them have decided that networking and participating at tech events is not worth the grief that they have to put up with. Clearly, the tech profession has a problem if women feel forced to make this choice. And, guys, it would be morally reprehensible if we didn’t do all we can to rectify that situation.




Automation technology is getting better as help desk requests continue to rise

Competing forces are affecting people who work on help or service desks. One is improving automation tools, which advocates say can replace level 1 and 2 support staff. At the same time, the number of help desk tickets is rising each year, which puts more demand on the service desk.

These cross-currents in the industry make it hard to predict the fate of some IT jobs. A Pew survey, released in August, of nearly 1,900 experts found a clear split on what the future may bring: 52% said tech advances will not displace more jobs than they create by 2025, but 48% said they will.

Either way, a push toward automation is certain. In the help desk industry, the goal is to keep as many calls for help as possible at either Level 0, which is self-help, or Level 1. This is called “shift-left” in the industry.

“It costs way more to have a Level 3 or Level 2 person resolve an issue, and it also takes a lot more time,” said Roy Atkinson, an analyst at HDI, formerly known as the Help Desk Institute. To keep costs down, help desks are increasingly turning to automation and improvements in technologies such as natural language processing, he said.

A Level 1 worker will take an initial call, suggest a couple of fixes, and then — lacking the skill or authority to do much more — escalate the issue. The Level 2 worker can do field repair work and may have specific application knowledge. A Level 3 escalation might involve working directly with application developers, while Level 4 means taking the problem outside to a vendor.
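That tiered model can be expressed as a simple escalation chain. The sketch below is illustrative; the issue-type labels are invented and not HDI’s taxonomy:

```python
# Each support level handles a set of issue types; anything no level can
# resolve escalates all the way out to a vendor (Level 4).
LEVELS = [
    (1, {"password_reset", "known_fix"}),       # scripted first-line fixes
    (2, {"field_repair", "app_config"}),        # field work, app knowledge
    (3, {"application_bug"}),                   # works with developers
    (4, {"vendor_defect"}),                     # outside the organization
]

def route(issue):
    """Return the first (cheapest) level capable of resolving the issue."""
    for level, skills in LEVELS:
        if issue in skills:
            return level
    return 4  # unknown problems go outside to a vendor

print(route("password_reset"), route("application_bug"), route("mystery"))
```

The “shift-left” push described earlier amounts to enlarging the Level 0/1 skill sets so that `route` returns a low number for as many issues as possible.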

Among the companies developing automation tools is New York-based IPsoft, a 15-year-old firm with more than 2,000 employees. It develops software robotic technology and couples it with management services.

A majority of IT infrastructure will eventually be “managed by expert systems, not by human beings,” said Frank Lansink, the firm’s CEO for the European Union. IPsoft says its technology can now eliminate 60% of infrastructure labor tasks.

IPsoft’s autonomic tools might discover, for instance, a network switch that isn’t functioning, or a wireless access point that is down. The system creates tickets and then deploys an expert system, a software robot with the programming to make the repair. If it can’t be done, a human intervenes.
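The detect, ticket, repair, escalate loop described above might be sketched like this (the device states and the repair callback are hypothetical, not IPsoft’s API):

```python
def remediate(devices, repair):
    """Open a ticket for each down device, try an automated repair,
    and hand the ticket to a human only if the repair fails."""
    tickets = []
    for name, status in devices.items():
        if status == "down":
            fixed = repair(name)
            tickets.append({"device": name,
                            "resolved_by": "robot" if fixed else "human"})
    return tickets

# A toy repair robot that knows how to bounce switches but not access points.
devices = {"switch-7": "down", "ap-3": "down", "core-1": "up"}
print(remediate(devices, repair=lambda d: d.startswith("switch")))
```

The labor saving comes from the first branch: every ticket the robot closes is one that never reaches a Level 2 or Level 3 technician.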

Many service desk jobs have been moved offshore over the last decade, displacing workers. That trend is ongoing. One of the ideas underlying IPsoft’s business model is a belief that offshore, as well as onshore, labor costs can be further reduced through automation.

Offshore firms are clearly interested. IPsoft’s platform was adopted last year by Infosys and, more recently, by Accenture.

One IT manager using IPsoft’s automation technology and services to support his firm’s infrastructure — including its network, servers and laptops — is Marcel Chiriac, the CIO of Rompetrol Group, a Romania-based oil industry firm with 7,000 employees serving Europe and Asia.

“Without the automation, we would have to pay a lot more” for IT support, said Chiriac.

The cost savings arise from automatic repairs and routine maintenance that might otherwise be neglected, said Chiriac.

If he weren’t using autonomic tools, Chiriac said he would have to hire more people for a similar level of service. But he can’t easily estimate the impact on staff because of the firm’s IT history. (Rompetrol Group outsourced its 140 IT staff, ended that relationship, then rebuilt an internal IT staff with about two dozen fewer workers; it also uses outsourcing as a supplement.)

Nonetheless, Chiriac doesn’t believe that infrastructure automation will necessarily eliminate IT jobs, though it may shift them to other IT areas. “In IT, we’re not going to run out of work for the next two generations,” said Chiriac.

The work that help or service desks are asked to take on is increasing. Two-thirds of 1,200 organizations surveyed by HDI reported that the number of tickets, whether to fix something broken or, for instance, to outfit a new hire or change permissions, is increasing annually by more than 60%.

The top five reasons for this increase, according to HDI’s survey, are an increase in the number of customers at surveyed firms, a rising number of applications, changes in infrastructure, increases in the scope of services, and the need to support different types of equipment and more devices. The latter could reflect BYOD use.

At the same time, support is being transformed in new ways. Service desks may, for instance, now act as a liaison for all service providers, including cloud and mobile carriers, said Atkinson.

“I think a lot of people have been predicting the death of support for a number of years, and it hasn’t happened,” said Atkinson.



Google is drawing from the work of the open-source community to offer its cloud customers a service to better manage their clusters of virtual servers.

On Monday, the Google Cloud Platform started offering the commercial version of the open-source Mesos cluster management software, offered by Mesosphere.

With the Mesosphere software, “You can create a truly multitenant cluster, and that drives up utilization and simplifies operations,” said Florian Leibert, co-founder and CEO of Mesosphere. Leibert was also the engineering lead at Twitter who introduced Mesos to the social media company.

First developed at the University of California, Berkeley, Mesos can be thought of as an operating system that allows an administrator to control an entire cluster of computers, or even an entire data center, as if it were a single machine.

Thanks to its fine-tuned scheduling capabilities, Mesos can allow multiple frameworks, such as Hadoop or Spark, to share a single cluster, as well as allow multiple copies of the same framework to run on a single cluster.

The software also has built-in resiliency: If one or several nodes stop working, the software can automatically move that work to other, operational nodes in that cluster.
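That failover behavior amounts to rescheduling tasks from lost nodes onto the surviving ones. A toy sketch of the idea (this is an illustration of the principle, not Mesos’s actual scheduler):

```python
def reschedule(assignments, healthy):
    """Move tasks off nodes that are no longer healthy, spreading them
    round-robin over the surviving nodes."""
    nodes = sorted(healthy)
    if not nodes:
        raise RuntimeError("no healthy nodes left in the cluster")
    result, i = {}, 0
    for task, node in assignments.items():
        if node in healthy:
            result[task] = node            # untouched
        else:
            result[task] = nodes[i % len(nodes)]
            i += 1
    return result

before = {"hadoop-1": "nodeA", "spark-1": "nodeB", "spark-2": "nodeB"}
print(reschedule(before, healthy={"nodeA", "nodeC"}))  # nodeB's work moves
```

In the real system the scheduler also accounts for each node’s free CPU and memory, but the core resiliency property is the same: work outlives the machine it started on.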

Twitter, Airbnb, Netflix and Hubspot have all used Mesos to coordinate operations.

Google has modified its new software for managing Docker containers, called Kubernetes, so it can run on Mesos, work Google also announced Monday.

Google has been an ardent user of Docker internally, running more than 2 billion containers a week in its routine operations. The open-source Docker provides container-based virtualization, an alternative to traditional virtualization that many organizations are now considering due to its putative performance advantages.

Now, Google customers can use a Mesosphere cluster to run Docker containers and devote any leftover capacity to other framework-based workloads.

“You’ll be able to create these modern distributed systems the way that Google does, and you’ll be able to run them side-by-side with all your existing applications,” said Craig McLuckie, Google Cloud Platform product manager.

Users can also move their workloads to any cloud provider that runs Mesos, eliminating the dependencies that can come with writing the applications to run on a specific cloud service, be that Google’s or some other vendor’s.

Google’s Mesosphere cluster package also includes the Apache ZooKeeper configuration software and the Marathon scheduler, as well as OpenVPN for logging in to the cluster.

Use of Mesosphere on the Google Cloud Platform is not billed separately; it is included in the price of running a cluster.




7 cool uses of beacons you may not expect

Written by admin
August 13th, 2014

Beacons are a useful new twist on location awareness, tying a unique ID to a small device that apps can use to discover not just location but other relevant context about the beacon’s location or what it is attached to.
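At its core, the pattern described above is a lookup: a beacon broadcasts only an identifier, and the app resolves that identifier to location and context it already knows about. The sketch below shows this in Python using iBeacon-style (UUID, major, minor) triples; the registry contents and identifiers are hypothetical.

```python
# Minimal sketch: resolve a sighted beacon's identifier to app-side context.
# The beacon itself carries no data beyond its ID; meaning lives in the app.

BEACON_REGISTRY = {
    # (uuid, major, minor) -> context the app has attached to that beacon
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 101): {
        "place": "Terminal B security line",
        "action": "estimate_wait_time",
    },
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 102): {
        "place": "Gate B22",
        "action": "check_connection_status",
    },
}

def context_for(uuid, major, minor):
    """Return the context registered for a sighted beacon, or None."""
    return BEACON_REGISTRY.get((uuid, major, minor))

ctx = context_for("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 101)
```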

Most beacon trials involve spam-like use by retailers, such as shoppers getting pitches for specific brands of cereal while walking down the cereal aisle, but there are better uses of beacons being piloted. These examples come from Onyx Beacons and StickNFind, two beacon vendors that work with a wide range of industries, reflecting actual pilot projects.

Identify passengers stuck in airport security
A consortium of airlines and airports is testing placement of beacons at security lines in airports. Passengers’ airline apps would know those people are in the security line, and airlines could thus know if there are people at risk of missing their flight so they can send out reps to get them or hold the plane. The same notion could be used at airport gates to monitor people who risk missing connecting flights or whose gates have changed — even helping them get to their new gate faster.

Remember to take out the garbage
Imagine: You affix a beacon to your garbage can, and your task manager knows that Thursday is garbage pickup day. If you go past your garbage can on Thursday, a BLE auto-connection sends you a reminder to take it out. Combine that with GPS location detection from your phone, and your app will know whether you actually moved the can to the curb and skip the reminder if you did.

Track things smarter
All sorts of industries are looking at affixing beacons to pallets, carts, and other movable equipment to track their location as they move about. Think airline cargo containers, hospitals’ computers-on-wheels, warehouse pallets, museum artwork, bulldozers at a construction site, contractors at a job site, even hospital patients, students, or visitors (so they don’t get lost and their movement patterns can be discovered, such as for space planning).

Authorize access to cars, buildings, and more
We already have Bluetooth fobs to unlock our cars so we can drive them and Bluetooth locks that know it’s your iPhone at the door. Kaiser Permanente uses Bluetooth badges to authorize physicians’ access to their individual accounts on shared computers in each exam room. The same can be done with tablets.

So it’s no surprise that companies are exploring the use of beacons and people’s own mobile devices as access systems for their buildings, or to unlock and start cars, rather than using proprietary radio readers or fobs.

Navigate buildings and other spaces
When you visit a customer, you often get lost when trying to find a conference room, bathroom, or kitchen. Beacons can be used both as virtual “where am I?” kiosks and as monitors of your movement, so an app can guide you to your destination — and alert the company if you wander where you shouldn’t.

Plus, you can get information about where you are, whether about the artist whose painting you are viewing in a museum or the instructions for the copier you’re trying to operate — even just the Wi-Fi password for the conference room’s public hotspot.
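A simple way an app can answer “where am I?” from beacon sightings is to pick the beacon with the strongest received signal and report its known location. The sketch below assumes hypothetical beacon IDs, locations, and RSSI readings; real indoor positioning typically smooths noisy RSSI over time rather than trusting a single reading.

```python
# Minimal "where am I?" sketch: the nearest beacon is taken to be the one
# with the strongest (least negative) RSSI reading, in dBm.

BEACON_LOCATIONS = {
    "lobby-1": "Main lobby",
    "conf-3a": "Conference room 3A",
    "kitchen": "2nd-floor kitchen",
}

def where_am_i(sightings):
    """sightings: {beacon_id: rssi_dbm}. Return the nearest known location."""
    if not sightings:
        return None
    nearest = max(sightings, key=sightings.get)
    return BEACON_LOCATIONS.get(nearest, "Unknown beacon")

loc = where_am_i({"lobby-1": -82, "conf-3a": -60, "kitchen": -75})
```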

Check out ski conditions at the lift
One ski resort is placing beacons at ski lift entrances not just to track the number of people using each lift, but to let skiers check the conditions for the runs available at each lift before they get on the lift. If a resort’s app had information about your age or skiing skills, it could even suggest that you should try a different run and thus go to a different lift.

Provide smarter signage
Signs are great, but limited. They hold only a fixed amount of information, and they are often limited to one or two languages. If signs were beacon-enabled, users could get translations in their own languages and more information than any sign could hold. You can imagine such uses in zoos, museums, and botanical gardens, but also amusement parks, airport lobbies, and hospitals.
