Archive for the ‘Tech’ Category

Where to find security certifications

Written by admin
November 19th, 2014

Some say they are essential to a successful security career. Others argue they are an outdated concept and a waste of time. Despite the debate, here are 10 places to further learn about the security trade and the certifications required for some jobs in the info sec field.

Security certifications
The debate rages on over whether gaining security certifications means much. Some think they aren’t worth the paper they are printed on; others believe certifications prove that the holder knows what they are doing. With that, here is a group of vendors that offer security certifications.

CERT Division
According to its site: “We were there for the first internet security incident and we’re still here 25 years later. We’ve expanded our expertise from incident response to a comprehensive, proactive approach to securing networked systems. The CERT Division is part of the Software Engineering Institute, which is based at Carnegie Mellon University. We are the world’s leading trusted authority dedicated to improving the security and resilience of computer systems and networks and are a national asset in the field of cybersecurity.”

Certified Wireless Network Professional
“The CWSP certification is a professional level wireless LAN certification for the CWNP Program. The CWSP certification will advance your career by ensuring you have the skills to successfully secure enterprise Wi-Fi networks from hackers, no matter which brand of Wi-Fi gear your organization deploys.”

CompTIA
CompTIA has four IT certification series that test different knowledge standards, from entry-level to expert. A list of certifications can be found here.

Global Information Assurance Certification
According to its site: “Global Information Assurance Certification (GIAC) is the leading provider and developer of Cyber Security Certifications. GIAC tests and validates the ability of practitioners in information security, forensics, and software security. GIAC certification holders are recognized as experts in the IT industry and are sought after globally by government, military and industry to protect the cyber environment.”

Information Assurance Certification Review Board
As stated in its charter: “The Board will sponsor a world-class certification set, that meets or exceeds the needs of organizations and individuals wishing to hold candidates to the highest possible level of professional certification in the area of information security. As much as it is feasible, the Board will remain independent from any commercial organization.”

The IACRB currently offers certifications for seven job-specific roles that reflect the current duties of information security professionals.

International Information Systems Security Certification Consortium
According to its Web site: “The (ISC)² CBK is the accepted standard in the industry and continues to be updated to reflect the most current and relevant topics required to practice in the field.”

To find out more about what it offers, go here.

ISACA
“As an independent, nonprofit, global association, ISACA engages in the development, adoption and use of globally accepted, industry-leading knowledge and practices for information systems,” according to its Web site.

To find out more about the training offered, go here.

McAfee Institute
McAfee’s motto is “Your Place to Learn Real-World Skills & Advance your Crime-Fighting Career!”

Find out more about its certifications here.

Mile2
Mile2 is a developer and provider of proprietary, vendor-neutral professional certifications for the cybersecurity industry. Mile2 has a long list of internationally recognized cybersecurity certifications.

Security University
According to its site: “Since 1999, Security University has led the professional cybersecurity education industry in hands-on information security training & education. Security University provides uniform IT security workforce training with performance based, tactical hands-on security skills that qualify and validate the workforce so less people can do the same job or more, with consistent cybersecurity skills.”

If you have taken other certification courses, please let us know how they went in the comments section.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

6 ways to maximize your IT training budget

Written by admin
November 15th, 2014

Customized, in-house training often zeros in on topics relevant to your business. However, it also comes with a high price tag. If your employees simply need to fill in knowledge gaps or get up to speed with a specific software package, there are a plethora of affordable, flexible options for even the most limited budgets.

Although the economy is picking up ever so slightly, IT departments remain on the lookout for ways to do more with less – fewer people, fewer resources, less money. That’s why learning how to stretch the training budget as far as possible can pay significant dividends. This is true both for those organizations seeking to develop employee skills and knowledge for the least expenditure, and for employees looking to improve and enhance their career potential and longevity.

If an organization can get its employees to buy into training and career development, it can effectively double its training dollars when costs get split 50-50. This is already an implicit aspect of many tuition support programs, where employers offer a partial stipend or payment to help cover the costs of academic coursework. Why not make it a part of how IT training dollars get spent, too?

Some IT departments offer their employees a menu of courses or certifications from which employees can choose, coupled with (partial) reimbursement plans to help defray their costs. By offering more support for those credentials it needs the most, and less for those credentials outside the “must-have” list, organizations can steer employees in the directions they want them to go.
Negotiate Discounts to Control Costs

Times are tough for training companies, too. If you do want to buy into online or classroom training, you’ll get a better bang from your budget if you negotiate a “group rate” of sorts to cover some or all of your training needs.

Although online or virtual classes may not be as popular as instructor-led in-class training, remote offerings usually cost less to begin with; obtaining additional discounts will help leverage such spending even further. Some training companies offer subscriptions to their entire training libraries on a per-seat, per-month basis.

Pluralsight offers its extensive training catalog to individuals for about $50 a month, for example, and its business offerings include progress tracking and assessments for enrolled employees, as well as library access for some number of individuals. A 10-user license costs about $25 per month, per individual user for a Basic package, and double that for their Plus package, which adds exercises, assessments and offline viewing to the basic ability to watch courses online on a PC or mobile device.
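As a quick sanity check on those per-seat numbers, here is a small sketch (using the article's approximate prices, which may have changed since) of what a 10-user subscription adds up to per year:

```python
# Annual cost of a per-seat, per-month training subscription, using the
# article's approximate Pluralsight figures: about $25/user/month for
# Basic and double that for Plus.
def annual_subscription(users, per_user_monthly):
    """Total yearly cost for a team subscription."""
    return users * per_user_monthly * 12

print(annual_subscription(10, 25))  # Basic, 10 users: 3000
print(annual_subscription(10, 50))  # Plus, 10 users: 6000
```

Even the pricier tier can come in well under the cost of sending the same ten people to a multi-day instructor-led class.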
Purchase Key Items in Bulk

If you know you need to run a team of system engineers or senior tech support staff through a specific curriculum that includes certain certification exams, and you can hold those people to a schedule, then you can purchase exam voucher or training/voucher bundles at a discount. As the purveyor of many popular and high-demand cert exams, and a publisher of copious related training materials, Pearson VUE/Pearson Education offers much of what employers need for such programs. Contact the Voucher Store to inquire about volume purchase pricing and arrangements.

(Note: The author writes on an occasional basis for InformIt, a professional development branch of Pearson, and on a frequent basis for the Pearson IT Certification blog.)
Assemble Employee Study Groups and Resources

Just a little added support for employees involved in training, or preparing for certification, can help organizations realize better results from (and returns on) their training investments. Consider some or all of the following strategies to help employees make the most of their training experience and get the best value for your training dollars:

Set up a wiki or online forums/chat rooms on a per-topic or per-exam basis for employees to use and share.
Encourage employees to share their best resources, learning materials, study techniques and so forth with one another. Build compendia of such materials and pointers for ongoing sharing.
Provide access to practice tests, exercises and simulated or virtual labs for hands-on work so employees can check their learning, buttress their weak spots and develop a well-rounded understanding of training materials, exam objectives and coverage.
Identify local subject matter experts to whom training and certification candidates can turn for added information and explanation when the need arises.

Because many employees will be interested in these kinds of things, you can find volunteers to help create and maintain these kinds of low-cost but high-value training and prep tools and resources.

Provide Recognition and Rewards to Those Who Succeed

Sure, it would be nice if everyone who earns a certification or masters some new body of knowledge could get a 25 percent raise and/or a promotion as a consequence of completing a program of some kind. In some cases, such rewards may even be required to retain employees who earn coveted credentials such as the Cisco CCIE, (ISC)2 CISSP or the ITIL Master Qualification.

However, even small rewards, such as a $100 gift certificate for a family night out or a gift card to a local department store, can communicate your appreciation to those employees who manage to chew, swallow and digest what they must bite off to pursue training and certification. A public pat on the back in the employee newsletter or at a periodic employee meeting doesn’t hurt, either. Recognition provides added impetus for employees to finish what they start and shows them that you value the time and effort they must expend in pursuing training and certification.
Ask for Ideas and Suggestions, Then Act Upon Them

Beyond the various methods to stretch your training budget outlined here, you can also turn to your target audience to ask how it thinks you can maximize the return on training and certification. You may be surprised by the quality and quantity of resulting feedback. Most employees respond positively to on-the-job opportunities for career and professional development. They, too, understand that the likelihood of continuing support rests on the outcomes of their training and certification efforts. In the end, they know full well that, by helping the organization excel and improve, they too will benefit from improved job and pay prospects.



3 power-sipping monitors lower energy bills

Written by admin
November 8th, 2014

Can you have a great monitor that also scrimps on electricity — and helps the environment? We test three 27-in. power-saving displays to find out.

While businesses have always been careful about how much they spend on electricity, today’s displays are making it a lot easier to keep bills lower — and the environment safer.

For example, I measured a four-year-old 27-in. Apple Cinema Display as using 101 watts of power. However, the three 27-in. displays that I’ve tested in this roundup of environmentally smart monitors — AOC’s E2752VH, Dell’s UltraSharp 27 UZ2715H and the Philips 271S4LPYEB — use an average of 26.4 watts.

This is also reflected in the annual cost of electricity to run each monitor. Assuming the display is used for 10 hours every business day and that power costs 12 cents per kilowatt-hour (the national average), the Apple Cinema Display costs $29 for the year, while the three reviewed here average an annual cost of $9.32. That doesn’t sound like a lot, but if you’re a business with a couple of hundred employees, it can add up quickly.

In addition, all three of these displays carry the EPA’s EnergyStar logo and boast wide screens that can display full HD resolution. And according to the EPA’s Greenhouse Gas Equivalencies Calculator, every kilowatt-hour of power saved equals 1.5 lbs. of carbon dioxide that isn’t spewed into the atmosphere.
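The cost and carbon figures above follow from simple arithmetic. Here is a sketch using the article's assumptions (10 hours per business day, 250 business days a year, 12 cents/kWh, 1.5 lbs of CO2 per kWh). Note that the published dollar figures also fold in sleep-mode draw, so they differ slightly from this active-use-only estimate:

```python
# Estimated annual energy use, cost and CO2 for a monitor, based on the
# article's assumptions: 10 hours/day, 250 business days/year, 12 cents/kWh,
# and the EPA's rough figure of 1.5 lbs of CO2 per kWh.
def annual_kwh(watts, hours_per_day=10, days_per_year=250):
    """Kilowatt-hours consumed per year at a given active power draw."""
    return watts * hours_per_day * days_per_year / 1000

def annual_cost(watts, rate_per_kwh=0.12):
    """Estimated yearly electricity cost in dollars (active use only)."""
    return annual_kwh(watts) * rate_per_kwh

def annual_co2_lbs(watts):
    """Pounds of CO2 attributable to a year of active use."""
    return annual_kwh(watts) * 1.5

# The four-year-old Apple Cinema Display measured at 101 watts:
print(round(annual_cost(101), 2))
```

Swapping in the 26.4-watt average of the three reviewed displays makes the gap in both dollars and carbon obvious.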

Interestingly, these displays have different strategies as to how they reduce their power consumption. The Dell screen relies on the computer’s screen saver to signal when it’s time to go to sleep. The AOC adds a built-in timer for determining when it turns the screen off. And the Philips display includes a pair of infrared sensors that continually scan to see if someone is sitting in front of the screen. When you get up, it senses that the space in front is empty and shuts the screen down, reducing its power draw.

Of course, saving on electricity doesn’t mean a thing if a display doesn’t excel in its main purpose. To see what these displays have to offer, I used them every day for a couple of months in my office. I took turns with them one-on-one and then viewed them together showing the same material for comparison.

Saving a few kilowatts here and there might not sound like a huge savings. But, if you multiply the savings by the number of monitors in use every day in a company’s offices, it adds up quickly.

If you’re looking for a frugal monitor, AOC’s E2752VH is not only the cheapest display of the three, but is the only one that uses no measurable power when in sleep mode. However, it falls short on creature comforts like a webcam and USB ports.

Like the others reviewed here, the AOC monitor uses a 27-in. IPS panel with a 1920 x 1080 resolution. It features an ultra-fast 2-millisecond (ms) response time, versus 5ms and 8ms for the Philips and Dell displays, respectively.

The all-black casing is broken only by a small blue light in the lower right corner to show it’s turned on. On the right side of the monitor’s front, there are controls for turning it on and off, raising and lowering the volume, and using the on-screen menu. The marking for each switch’s function is embossed in the display’s plastic case; classy, but I found the highlighted white markings on the other two displays easier to read.
Saving power

The AOC monitor has two techniques for saving power when it’s not being used. First, like the Dell display, it can use the computer’s screen saver to trigger its sleep mode. It can also be configured to shut down when the computer is off or goes to sleep. In addition, a timer lets you shut down the screen after a period of inactivity. Unfortunately, the time can be configured in increments of one hour only.

The AOC consumed 27.5 watts when being used, a little more than the Dell display’s power profile. Unlike the others, when the AOC screen goes to sleep, it uses no discernible power, compared to 1.1 watts and 2.1 watts for the Dell and Philips displays, respectively. It took 3.1 seconds to wake up.

Assuming it is used for 10 hours every business day and power costs 12 cents per kilowatt-hour (the national average), the AOC should cost an estimated $8.30 to use per year. That makes it the cheapest of the three to use, if only about $2 a year less than the Philips monitor.
How well it worked

At 215 candelas per square meter, the AOC’s light output was the lowest of the three; to my eyes, it looked visibly dimmer than the Dell monitor. Its color balance appeared accurate with strong blues and reds. Video play was smooth, with no lags or glitches.

In addition to a standard mode, the display has settings for text, Internet, games, movies and sports. For those who want to tweak the output, the monitor has adjustments for brightness, contrast, gamma and color temperature. Unfortunately, temperature settings are restricted to normal, warm, cool and sRGB settings. You can fiddle with the red, blue and green colors, but I preferred using the Philips’s more extensive presets that are based on actual color temperatures.

The monitor also comes with two Windows-only apps. iMenu lets you adjust brightness, contrast and gamma, but lacks the calibration patterns of the Philips display. Interestingly, several of the program’s labels are in Chinese characters, making the app hard to fathom without the manual.

The eSaver app is how you tell the monitor when to go to sleep, based on the status of your PC. For example, I set it to turn off one minute after the computer is shut down or 10 minutes after the computer goes to sleep or the screen saver comes on. Neither of the other monitors reviewed here can match this specificity.

The AOC display makes do with two 2.5-watt speakers; there is no webcam, microphone or USB hub. The speakers are on the bottom edge of the display, so they sound thin and don’t get nearly as loud as the Dell’s sound system.
Other features

Its assortment of ports (one DVI, one HDMI and one VGA) lacks the Dell’s second HDMI port and the Philips’s DisplayPort input. But the AOC ports are all horizontally oriented, while the other two displays have vertical ports that are more awkward to plug in.

I did appreciate the addition of an analog audio input, which I used to connect to my phone’s output to listen to music while working. The display also has a headphone jack in the back.

The AOC stand was the easiest of the three to set up, because the base snaps into the monitor arm. Like the others, the monitor has standard VESA mounting holes on the back for screwing it into a third-party stand. However, the only way to adjust the stand is to tilt it up to 3 degrees forward or up to 17 degrees back. It can’t go up and down, swivel or rotate.
Bottom line

With a three-year warranty, the AOC is available at prices starting under $200, the least expensive of the three. If you just need a basic monitor that can offer some savings in electric bills, this is a good choice, but its lack of a webcam among other features may limit its usefulness.

The Dell UltraSharp 27 may be the most expensive of the three displays reviewed here, but it delivers the best mix of screen and multimedia accessories.

The gray and black monitor takes up the least desktop space of the three, something to consider if you’re part of a company that is tight on cubicle space. Built around an IPS panel that offers 1920 x 1080 resolution, the Dell uses hardened anti-glare glass.

There are controls up front for turning the display on and off, using the on-screen menu, and turning the volume up or down; there’s also a handy mute button. I was surprised and impressed by a button with a telephone receiver icon that can initiate or answer a phone call over Microsoft’s Lync VoIP system. (To get this to work, you’ll need to link the screen with a PC via a USB cable.)

The Dell comes with a seductive-sounding PowerNap feature, which triggers the display’s sleep mode when the computer’s screen saver comes on. The monitor first dims the screen’s brightness and then shuts itself down. The screen comes back on when the host computer’s screen saver shuts off. In my tests, the screen woke up in 1.5 seconds.

While being used, the Dell UltraSharp 27 consumed 23.6 watts of power, the least amount of the three. This drops to 1.1 watts when in sleep mode, half what the Philips monitor uses in the same mode.

Based on a typical usage scenario (assuming it’s on for 10 hours a day for every business day and in idle mode the rest of the time, and that power costs 12 cents per kilowatt-hour), this adds up to an estimated annual power cost of $9.45, halfway between the higher-cost Philips and less expensive AOC monitors.
How well it worked

The display was able to deliver 246 candelas per square meter of brightness, the brightest of the three reviewed here. Its reds and greens were spot on, but the screen’s blues appeared slightly washed out. The screen was able to render smooth video; however, its video response time of 8ms is the slowest of the three.

The Dell monitor’s on-screen menu has controls for tweaking brightness, contrast and sharpness as well as adjusting the display’s gamma settings for using a Windows PC or a Mac. To do any meaningful customization, though, you’ll need to load the included Display Manager software. This application, which only works with Windows PCs, includes the ability to change the display’s color temperature as well as choose among Standard, Multimedia, Gaming or Movie modes.

The Dell is a fine all-around monitor; it excels at delivering all the audio-visual accessories that a modern desktop requires. These include an HD webcam for video conferences as well as a dual-microphone array that does a good job of capturing your voice while reducing noise.

The display’s pair of speakers sounded surprisingly good and can actually get too loud for an office. There is a headphone jack on the side, but the Dell lacks the AOC’s audio-in jack.
Other features

The Dell display has the best assortment of ports as well, including one VGA, one DisplayPort and two HDMI ports, both of which can work with an MHL adapter and a compatible phone or tablet. The ports are oriented vertically rather than the more convenient horizontal orientation of the AOC display.

Unlike the others, the display has two USB 2.0 ports and a single USB 3.0 port. All of its cables can be routed through a hole in the back of the monitor’s stand to keep them tidy. On the other hand, the stand’s adjustability is limited: It tilts forward by 5 degrees and back by 22 degrees, but it can’t go up and down, rotate or swivel.

As is the case with the AOC and Philips displays, you can remove the display from the stand and use its VESA mounting holes for use with a third-party stand or mounting hardware. You just need to press a spring-loaded button to release the panel from the stand.
Bottom line

The Dell UltraSharp 27 includes a three-year warranty and has a list price of $450, considerably more than the AOC’s cost. (Note that the cost of the Dell changed several times while this review was being written and edited.) Even at that price, though, the Dell provides all the accoutrements needed for doing desktop work without wasting power.

As minimalist as a monitor gets these days, the Philips 271S4LPYEB is not only power-aware but knows when you’re sitting in front of it and can automatically go to sleep when you’re not. Too bad it lacks creature comforts like a webcam, speakers or even an HDMI port.

The all-black display houses a 1920 x 1080 IPS panel that is rated at 5ms response time.

Perhaps the most interesting feature is a pair of infrared sensors that perceive whether someone is sitting in front of the screen. Called PowerSensor, the system can be set to four different distances (user to screen) between 12 in. and 40 in. It’s quite an impressive trick. One minute after the space in front of the display is vacated, the image dims; two minutes later, the screen goes black. Then, like magic, the screen lights back up when you sit in front of it. When I tried it, the screen came back to life in less than a second.
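The PowerSensor timeline amounts to a simple state machine. Here is a sketch, with the one-minute and two-minute delays taken from the behavior described above:

```python
# A sketch of the PowerSensor behavior: the image dims one minute after
# the seat empties, and the screen goes black two minutes after that.
DIM_AFTER_S = 60           # seconds of vacancy before the image dims
OFF_AFTER_S = 60 + 120     # seconds of vacancy before the screen goes black

def screen_state(seconds_since_vacated):
    """Return the display state given how long the seat has been empty."""
    if seconds_since_vacated < DIM_AFTER_S:
        return "on"
    if seconds_since_vacated < OFF_AFTER_S:
        return "dimmed"
    return "off"

print(screen_state(30))   # on
print(screen_state(90))   # dimmed
print(screen_state(300))  # off
```

Any detected return to the seat would reset the clock and restore the "on" state, which is why the wake-up feels instantaneous.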

After some fiddling (to figure out which distance setting was best for me), I found it worked well and quickly responded to my absence and return. I was able to fool it, though, by leaving my desk chair with its back to the screen.

The Philips used 28.8 watts of power in Office mode, which was similar to the standard modes of the other two displays (however, power use varied only slightly with the other modes). When the PowerSensor kicked in, the power demand was initially reduced to 10.6 watts for one minute and then to 2.1 watts.

Ironically, though, the Philips turned out to be the highest power user of the three — probably because of the overhead required to keep the PowerSensor active and ready to restart the display. All told, using my assumptions that it was used for 10 hours a day for every business day and that electricity costs 12 cents per kilowatt-hour, the display had an estimated annual operating expense of $10.20.

In the front, the Philips monitor has a control for fine-tuning the PowerSensor along with others for turning the display on and off and working with the screen’s menu. A large bluish-green LED shows that the display is turned on. There are also buttons for adjusting the brightness level and selecting the company’s SmartImage feature.

SmartImage optimizes the display’s contrast to suit what you’re looking at. It has preset modes for Office, Photo, Movie, Game or Economy (which reduces its brightness by two-thirds). There’s also an adjustment for the screen’s color temperature with six settings available between 5,000K and 11,500K.

With the ability to deliver 221 candelas per square meter, the Philips monitor delivered rich blues and sharp yellows, but the display’s greens were too light and its reds appeared dull. Its ability to show video was very good — clear and smooth with no frame drops.

Loading the included Smart Control Premiere app (Windows PCs only) provides a deeper level of customization. It has the ability to change the screen’s black level and adjust the gamma settings. A big bonus is that it has a series of test images that you can use to calibrate the display.
Other features

While the AOC and Dell monitors have built-in speakers, the Philips lacks speakers, webcam, microphone and USB ports. In other words, it is a display and nothing more — rather unusual in today’s market.

Its collection of input ports is oriented vertically rather than in the AOC display’s more convenient horizontal orientation. The Philips has one DisplayPort, one DVI and one VGA port, but no HDMI port. As a result, I used its DVI input with an HDMI adapter.

The Philips does offer the best stand of the trio. With little effort, the display can be tilted forward 5 degrees and back by up to 20 degrees; it can also be raised or lowered by 6.3 in. and swiveled to the right or left by up to 140 degrees.

The entire display can also be easily rotated from landscape to portrait. This is useful if you want to work with a long document or a vertically oriented website without continually scrolling. The monitor’s software reorients the image after the screen is rotated.

After pressing a button in the back, you can remove the display from the stand, revealing its VESA mounting holes. This allows it to be used with a third-party stand or mounting hardware.
Bottom line

The Philips display comes with a three-year warranty and starts at a retail price of about $260, between the cheaper AOC monitor and the better equipped Dell display. While I love the display’s ability to sense when I’m working and when I’m someplace else — and the well-constructed stand — the Philips really needs some further refinement and power reduction before it’s ready for my office.

After using each of these three monitors for several weeks, I would love an amalgam of the three that is built around the Philips adaptable stand, the AOC’s power-saving abilities and the Dell’s bright screen.

That said, the PowerSensor feature on the Philips 271S4LPYEB is impressive and works well, but it uses too much electricity to be of much use.

I love that the AOC E2752VH doesn’t use a watt when it’s asleep. At $240, it is also the cheapest to get and use, but that’s not enough compensation for having the least bright monitor of the three.

The Dell UltraSharp 27 UZ2715H may not have the fastest display panel, but it is fine for business work and is the best equipped and brightest of the three — and uses a reasonable amount of power. I wish that the stand were more adaptable, but no other screen here does so much.

To see how these 27-in. monitors compare, I set each up in my office for at least a week as my primary display. I used each of them to write emails, edit text, create spreadsheets, watch videos, nose around on the Web and work with interactive online programs.

After unpacking and putting each together, I spent some time measuring and investigating how each stand can tilt, raise or rotate the screen. Then I looked over the display’s ports, speakers, microphone and webcam. I looked at the monitor’s controls and tried out the device’s features.

Then I connected each of the monitors to an iPad Mini (with an HDMI adapter), a Toshiba Radius P-55W notebook and a Nexus 7 tablet (connecting via a Chromecast receiver). Each screen was able to work with each source; since the Philips display lacks an HDMI port, I used its DVI port with an HDMI-to-DVI adapter.

I next measured each screen’s brightness with a Minolta LM-1 light meter using a white image in a darkened room. After measuring the light level at nine locations, I averaged them and converted the result to candelas per square meter. I then displayed a standard set of color bars and compared the three displays using an Orei HD104 four-way video distribution amplifier and a Toshiba Radius computer as the source.

To see how these monitors save power, I looked into their power conservation settings and software. I checked out how flexible the setting was for putting the display to sleep and measured how much electricity each monitor used with a Kill a Watt power meter.

Using the average U.S. price of 12 cents per kilowatt-hour of electricity, I estimated how much it might cost to operate each monitor, based on the assumption that it was used for 10 hours a day over the work year (250 days) and was asleep for the rest of the time.



Although degrees and IT certifications can be great eye candy for a resume, experience is king. As you may have encountered, a lack of experience can be a major roadblock to getting interest from employers in your early years.

Though you might have the Network+ or CCNA cert, for instance, have you actually configured or played around with a network? Even if you already have held a network technician or administrator position, you might not have experience with all aspects of networking yet. Fortunately there are ways to get hands-on network administration experience, even at home — and most don’t cost anything.

In this story I discuss nine self-taught labs on various networking topics, explaining the basics and how to get started. I begin with easier, newbie-level projects and progress to more complex ones requiring more thought and time. Some of the tasks take just a few minutes, while others are suitable for a weekend project. You may want to invest in some networking gear to teach yourself the basics, but there are ways around this.

Project 1: Configure TCP/IP Settings

One of the most basic network admin tasks is configuring the TCP/IP settings. If a network isn’t using the Dynamic Host Configuration Protocol (DHCP) that automatically hands out IP addresses after clients connect, you’ll have to manually set static IP and DNS addresses for each client. You may also be required to temporarily set static IP information when doing the initial setup and configuration of routers or other network components.

To set static IP details you must know the IP address of the router and the IP address range in which you can configure the client. You can figure this out from the settings of a computer already successfully connected to the network.

You’ll need the IP address as well as the Subnet Mask, the router’s IP address (a.k.a. the Default Gateway) and the Domain Name System (DNS) Server addresses.

In Windows: Open the Network Connections via the Control Panel or Network and Sharing Center. Next, open the connection that is already on the network and click the Details button.
In Mac OS X: In System Preferences, click the Network icon, then select the connection that is already on the network, such as AirPort (wireless) or Ethernet (wired). With a wired connection you’ll likely see the info you need on the first screen; for a wireless connection, additionally click the Advanced button and look under the TCP/IP and DNS tabs.

Write the numbers down or copy and paste them into a text file, and then close the window.
subnet calculator A subnet calculator shows the acceptable IP address range for a network.

To see the acceptable IP address range for the network, input the IP address and Subnet Mask you recorded into a subnet calculator, which will show the first and last addresses you can assign.
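The arithmetic a subnet calculator performs can also be sketched with Python's standard `ipaddress` module. The address and mask below are hypothetical stand-ins for the values you copied from the Details window:

```python
import ipaddress

# Hypothetical values -- substitute the IP address and Subnet Mask you
# recorded from a machine that is already on the network.
iface = ipaddress.ip_interface("192.168.1.42/255.255.255.0")
net = iface.network

print(net)                        # the network: 192.168.1.0/24
print(net.network_address + 1)    # first usable host address: 192.168.1.1
print(net.broadcast_address - 1)  # last usable host address: 192.168.1.254
print(net.num_addresses - 2)      # usable host addresses: 254
```

Any static IP you assign should fall between the first and last usable addresses and not collide with the router's own address.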

Even though you now know the IP address range, remember that each device must have a unique IP. It’s best to check which IP addresses are taken by logging into the router, but you could also take an educated guess or simply choose a random address within the range. If the address is already taken by another device, Windows or OS X will likely alert you of an IP conflict and you can choose another. Once the IP address is set, write it down or save it in a document; a best practice is to keep a log of all the static IPs along with the serial numbers of the computers that use them.

TCP/IP settings Manually setting the computer’s IP address.

Now, to set a static IP address:
In Windows: Open the Network Connection Status window, click the Properties button and open the Internet Protocol Version 4 (TCP/IPv4) settings. Choose “Use the following IP address” and enter the settings: an IP address that’s in the acceptable range, plus the Subnet Mask, Default Gateway and DNS Server from the Network Connection Details window.
In Mac OS X: Open the Network window and click the Advanced button. On the TCP/IP tab, click the drop-down next to Configure IPv4, choose Manually and enter an IP address that’s in the acceptable range, plus the Subnet Mask and router address you copied earlier. Go to the DNS tab and enter the DNS Server address you copied before.

Project 2: Use a Wi-Fi stumbler

As a network admin, you’ll likely help set up, troubleshoot and maintain the wireless portion of the network. One of the most basic tools you should have is a Wi-Fi stumbler. These tools scan the airwaves and list the basic details about nearby wireless routers and access points (APs), including the service set identifier (SSID), also known as the network name; the MAC address of the router/AP; the channel; the signal level; and the security status.

You can use a Wi-Fi stumbler to check out the airwaves at home or at work. For instance, you can check which channels are being used by any neighboring wireless networks so you can set yours to a clear channel. You can also double-check to ensure all the routers or access points are secured using at least WPA or WPA2 security.
NetSurveyor The NetSurveyor stumbler gives a text-based readout and visual charts of wireless channel usage and signals.

Vistumbler and NetSurveyor (for Windows), KisMAC (for OS X) and Kismet (for both plus Linux) are a few free options that give both text-based readouts and visual charts of the channel usage and signals. Check out my previous review of these and others.
Wifi Analyzer The Wifi Analyzer app provides a nice visualization for channel usage.

If you have an Android phone or tablet, consider installing a Wi-Fi stumbler app on it for a quick, less detailed look at the Wi-Fi signals. Wifi Analyzer and Meraki WiFi Stumbler are two free options. See my previous review of these and others.
Project 3: Play with a wireless router or AP

To get some experience with setting up and configuring wireless networks, play around with your wireless router at home. Or better yet, get your hands on a business-class AP: See if you can borrow one from your IT department, check eBay for used gear or consider buying new equipment from lower-cost vendors such as Ubiquiti Networks, where APs start at around $70.

To access a wireless router’s configuration interface, enter its IP address into a web browser. As you’ll remember from Project 1, the router’s address is the same as the Default Gateway address that Windows lists in the Details window for your wireless network connection.

Accessing an AP’s configuration interface varies. If there’s a wireless controller, it provides a single interface for configuring all the APs; with controller-less systems you must access each AP individually via its IP address.

Once you’ve accessed the configuration interface of your router or AP, take a look at all the settings and try to understand each one. Consider enabling wireless (or layer 2) isolation if supported and see how it blocks user-to-user traffic. Perhaps change the IP address of the router/AP in the LAN settings; for routers, try disabling DHCP and statically assigning each computer/device an IP address. Also consider setting a static DNS address (like from OpenDNS) in the WAN settings. You might also look into the Quality of Service (QoS) settings to prioritize the traffic. When you’re done experimenting, make sure it’s set to the strongest security — WPA2.
typical AP interface I’ve statically assigned this AP an IP address and DNS servers.

If you can’t get your hands on a business-class AP, consider playing around with interface emulators or demos offered by some vendors, as Cisco does with its small business line.

Project 4: Install DD-WRT on a wireless router
For more experimentation with wireless networking, check out the open-source DD-WRT firmware for wireless routers. For compatible routers, DD-WRT provides many advanced features and customization seen only in business- or enterprise-class routers and APs.

For instance, it supports virtual LANs and multiple SSIDs so you can segment a network into multiple virtual networks. It offers a VPN client and server for remote access or even site-to-site connections. Plus it provides customizable firewall, startup and shutdown scripts and supports a few different hotspot solutions.
DD-WRT DD-WRT loads a whole new feature set and interface onto the router.

For more on DD-WRT and help on installing it on your router, see “Teach your router new tricks with DD-WRT.”
Project 5: Analyze your network and Internet traffic

As a network admin or engineer you’ll likely have to troubleshoot issues that require looking at the actual packets passing through the network. Though network protocol analyzers can cost up to thousands of dollars, Wireshark is a free open-source option that works on pretty much any OS. It’s feature-rich, with support for live and offline analysis of hundreds of network protocols, decryption for many encryption types, powerful display filters, and the ability to read/write via many different capture file formats.

Wireshark Wireshark capturing network packets.

Once you get Wireshark installed, start capturing packets and see what you get. In other words, browse around the Web or navigate network shares to see the traffic fly. Keep in mind you can stop the live capturing to take a closer look. Although Wireshark can capture all the visible traffic passing through the network, you may see only the traffic to and from the client whose packets you’re capturing if “promiscuous” mode isn’t supported by your OS and/or the network adapter. (For more information, see the Wireshark website.)
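Wireshark saves captures in the classic libpcap format, whose 24-byte global header is simple enough to parse by hand, which is a good way to demystify what a capture file actually contains. A minimal sketch using only the stdlib `struct` module (the sample bytes are synthetic, not from a real capture):

```python
import struct

def parse_pcap_header(data):
    """Parse the 24-byte global header of a classic libpcap file."""
    magic = struct.unpack("<I", data[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"            # file written little-endian
    elif magic == 0xD4C3B2A1:
        endian = ">"            # file written big-endian
    else:
        raise ValueError("not a classic pcap file")
    major, minor, _tz, _sig, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24])
    return {"version": (major, minor), "snaplen": snaplen,
            "linktype": linktype}   # linktype 1 == Ethernet

# Synthetic header: pcap v2.4, 65535-byte snaplen, Ethernet link type.
sample = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(sample))
```

Per-packet records with timestamps and lengths follow this header; tools like Wireshark and tcpdump both read and write this same layout.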

Note: Even though packet capturing is usually a passive activity that doesn’t probe or disturb the network, some consider monitoring other people’s traffic a privacy or policy violation. To stay out of trouble, perform packet capturing only on your personal network at home, or clear it with management or the CTO before doing any monitoring or analysis of a company or school network.

There are other free network analyzers you might want to experiment with. For instance, the EffeTech HTTP Sniffer can reassemble captured HTTP packets and display a Web page, which can visually show you or others what’s captured rather than looking at the raw data packets. Password Sniffer “listens” just for passwords on your network and lists them, which shows just how insecure clear-text passwords are. And for mobile analysis via a rooted Android phone or tablet, there are free network analyzers like Shark for Root.

Project 6: Play with network emulators or simulators
Though you might not be able to get your hands on enterprise-level network gear for practicing, you can use emulators or simulators to virtually build and configure networks. They can be invaluable tools for preparing for IT certifications, including those from Cisco and Juniper. Once you create virtual network components and clients you can then configure and administer them with emulated commands and settings. You can even run network analyzers like Wireshark on some, to see the traffic passing through the network.

Here are a few of the many emulators and simulators:
The GNS3 Graphical Network Simulator is a popular free and open source choice. It requires you to supply the OS, such as Cisco IOS or Juniper’s Junos OS, which usually requires a subscription or support contract from the particular vendor, but you may be able to get access via the IT department at work or school.

GNS3 interface GNS3 Graphical Network Simulator supports Cisco IOS/IPS/PIX/ASA and Juniper JunOS.

Netkit is another free and open source option. It doesn’t include vendor-specific functionality and is limited to generic networking components, but it also doesn’t require you to have the OSes as GNS3 does.

The Boson NetSim Network Simulator is a commercial offering with pricing starting at $99; its intent is to teach Cisco’s IOS. It offers a free demo download, but its functionality is greatly limited.

There are also websites, such as SharonTools and Open Network Laboratory, that offer remote admin access to network components and Web-based emulators for you to practice with commands. Network World has a nice roundup of free Cisco simulators and emulators.

Project 7: Perform penetration testing on your own network
You can read and read about network security, but one of the best ways to learn about or verify security is penetration testing. I don’t mean you should snoop on your neighbors or hack a business; try it on your own network so you don’t end up in the slammer.

Perhaps find a network vulnerability that interests you, research how to take advantage of it and, once you’ve done the hack, make sure you understand how it was possible. And always ensure your network and those you administer are protected from the vulnerability.

Here are a few hacks you could try:
Crack Wi-Fi encryption — WEP is the easiest — with Aircrack-ng. Crack a Wi-Fi Protected Setup (WPS) registrar PIN with Reaver-WPS to gain access to a wireless router.
Hijack online accounts via Wi-Fi using the Firefox add-on Firesheep or the Android app DroidSheep.
Capture and crack 802.1X credentials using FreeRadius-WPE.
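To understand why weak Wi-Fi passphrases fall so quickly to tools like Aircrack-ng, it helps to see how WPA/WPA2-Personal derives its master key: deterministically from the passphrase and SSID via PBKDF2-HMAC-SHA1 with 4,096 iterations. An attacker who captures a handshake can therefore try candidate passphrases offline. A sketch of the derivation using only the stdlib (the passphrase and SSID are hypothetical):

```python
import hashlib

def wpa_pmk(passphrase, ssid):
    """Derive the 256-bit WPA/WPA2 Pairwise Master Key from a passphrase."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# A dictionary attack is just this call in a loop over candidate
# passphrases, comparing each derived key against the captured handshake.
pmk = wpa_pmk("correct horse battery staple", "HomeNetwork")
print(pmk.hex())  # same passphrase + SSID always yields the same key
```

This is also why a long, random passphrase matters: the 4,096 iterations slow each guess, but they can’t save a passphrase that appears in a wordlist.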

When researching, you’ll likely find how-to tutorials on exactly how to do the hacks and what tools you need. One popular option filled with hundreds of penetration testing tools is the BackTrack live CD, though the project is no longer maintained. Its successor, Kali Linux, can be installed on a computer or virtual machine, or run via live CD or USB.

If you find you like penetration testing, perhaps look into becoming an Ethical Hacker.

Project 8: Set up a RADIUS server for enterprise Wi-Fi security

At home you likely encrypt your wireless router with the Personal or Pre-shared Key (PSK) mode of WPA or WPA2 security to keep others off the network and to prevent them from snooping on your traffic. The Personal mode is the simplest way to encrypt your Wi-Fi: Set a password on the router and simply enter it on the devices and computers you connect.

Businesses, however, should use the Enterprise mode of WPA or WPA2, which incorporates 802.1X authentication. This is much more complex than the Personal mode but provides better protection. Instead of a global Wi-Fi password, each user receives his or her own login credentials; the encryption protects against user-to-user snooping. Plus you can change or revoke individual login credentials to protect the network when an employee leaves or a device is lost or stolen.

To use the enterprise mode you must have a separate Remote Authentication Dial-In User Service (RADIUS) server to handle the 802.1X authentication of users. As a network admin you’ll likely have to configure and troubleshoot clients with 802.1X authentication and help maintain the RADIUS server. For practice, consider setting up your own server and using enterprise-level Wi-Fi security on your home network.

If you’re working on a network that has a Windows Server, the Network Policy Server (NPS) or Internet Authentication Service (IAS) component can be used for the RADIUS server. But if not, you have a couple of free options. If you want some Linux experience, consider the open-source FreeRADIUS. Some easier-to-use options that include a Windows GUI are the freeware TekRADIUS and the 30-day free trials of commercial products like ClearBox. In a previous review, I evaluated these and other low-cost RADIUS servers.

Once you have the RADIUS server installed, create user accounts and input shared secrets (passwords) for the APs. Then configure the wireless router or APs with WPA/WPA2-Enterprise: Enter the RADIUS server’s IP and port, and the shared secret you defined on the RADIUS server. Then you can connect clients by entering the login credentials you defined on the RADIUS server.
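The shared secret you enter on both sides does real cryptographic work: among other things, RADIUS uses it to hide the user’s password inside the Access-Request with an MD5-based XOR scheme (RFC 2865 section 5.2). A stdlib-only sketch of that scheme, with a hypothetical secret and password:

```python
import hashlib
import os

def hide_password(password, secret, authenticator):
    """Obfuscate a User-Password attribute per RFC 2865 section 5.2."""
    # Pad the password to a multiple of 16 bytes with NUL bytes.
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        # Each 16-byte block is XORed with MD5(secret + previous block).
        key = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], key))
        out += block
        prev = block
    return out

secret = b"shared-secret"        # configured on both the AP and the server
authenticator = os.urandom(16)   # random, sent in each Access-Request
hidden = hide_password(b"hunter2", secret, authenticator)
print(len(hidden))  # 16 -- the password padded to one 16-byte block
```

Note this is obfuscation keyed on the shared secret rather than strong encryption, which is one reason RADIUS traffic should stay on a trusted management network.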

Here are a few previous articles you may want to check out: 6 secrets to a successful 802.1X rollout, Tips for troubleshooting 802.1X connections, and Lock Down Your Wi-Fi Network: 8 Tips for Small Businesses.
Project 9: Install Windows Server and set up a domain

As a network admin you’ll likely manage Microsoft-based networks running Windows Server. To gain more experience, consider running Windows Server at home.

Although purchasing a copy of a server edition just for tinkering around isn’t feasible, there are some free options. Microsoft provides 180-day free trials: a downloadable ISO for installing on a physical machine, a virtual hard drive (VHD) for running on a virtual machine, and access to a pre-configured virtual machine on the Windows Azure cloud. Plus the company offers Virtual Labs — guided tutorials in a virtual environment — that you might want to check out.

Once you get access to a server, discover and experiment. Perhaps configure Active Directory and play with Group Policies, set up Exchange and configure an Outlook client, or set up NPS for 802.1X authentication.
Next steps

If you found these projects useful in learning and getting experience, keep in mind there are many more self-taught labs out there online. Try searching for labs on the specific certifications you’re interested in or the network vendors you’d like to administer.

Though virtual environments and emulators provide a quick and easy way to get hands-on experience, also try to get as much time as you can with the real gear. Ask the IT department if you can borrow any spare equipment, and take advantage of any other chances you spot to get real-world experience.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at



The devices connected to your router battle for bandwidth like thirst-crazed beasts jostling for access to a receding watering hole. You can’t see the melee, but you can feel its impact. Without intervention, the strongest competitors—a BitTorrent download, for instance—will drink their fill, even if it’s not essential to their survival, while others—a VoIP call, a Netflix stream, or a YouTube video—are left to wither and die.


A router with good Quality of Service (QoS) technology can prevent such unequal distribution of a precious resource. You can dip only one straw into the Internet at a time, after all. QoS ensures that each client gets its chance for a sip, and it also takes each client’s specific needs into account. BitTorrent? Cool your jets. If one of your packets is dropped, it’ll be resent. You can run in the background. Netflix, VoIP, YouTube? Lag results in a bad user experience. Your data gets priority.

That’s a gross oversimplification, of course. Here’s a more in-depth explanation. QoS, also known as traffic shaping, assigns priority to each device and service operating on your network and controls the amount of bandwidth each is allowed to consume based on its mission. A file transfer, such as the aforementioned BitTorrent, is a fault-tolerant process. The client and the server exchange data to verify that all the bits are delivered. If any are lost in transit, they’ll be resent until the entire package has been delivered.

That can’t happen with a video or audio stream, a VoIP call, or an online gaming session. The client can’t ask the server to resend lost bits, because any interruption in the stream results in a glitch (or lag, in terms of game play). QoS recognizes the various types of traffic moving over your network and prioritizes it accordingly. File transfers will take longer while you’re watching a video or playing a game, but you’ll be assured of a good user experience.
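Applications can cooperate with this classification by marking their own packets: the IP header carries a DSCP value that QoS-aware routers can use to sort traffic. A minimal sketch using a standard socket option (works unprivileged on Linux and macOS; the value 46, “Expedited Forwarding,” is the conventional marking for voice traffic):

```python
import socket

EF = 46        # DSCP "Expedited Forwarding", conventionally used for voice
tos = EF << 2  # DSCP occupies the top 6 bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Every datagram sent on this socket now carries DSCP 46, which a
# QoS-aware router can classify as latency-sensitive traffic.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Routers are free to ignore or rewrite these markings, which is why the router-side QoS policies described below still matter.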

Traditional QoS

Different routers take different approaches to QoS. With some models, you simply identify the type of traffic you want to manage and then assign it a priority: High, medium, or low. With others, you can choose specific applications, or even identify the specific ports a service or application uses to reach the Internet. Yet another way is to assign priority to a specific device using its IP or MAC address.


Many older routers, such as this Netgear WNR2000 802.11n model, have predefined Quality of Service for a limited number of applications, but you must configure your own rules for anything the manufacturer didn’t think of.

Configuring QoS this way can be very cumbersome, requiring lots of knowledge of protocols, specific details about how your router operates, and networking in general. Some routers, for instance, depend on you to inform them of the maximum upload and download speeds your ISP supports. Enter the incorrect values, and your network might perform worse instead of better.

Fortunately, router manufacturers have made great strides in making QoS easier to configure. In some cases, it’s become entirely automatic.

Intelligent QoS

Some routers include the option of automated QoS handling. Most newer models support the Wi-Fi Multimedia (WMM) standard, for instance. WMM prioritizes network traffic in four categories, from highest to lowest: Voice, video, best effort (most traffic from apps other than voice and video), and background (print jobs, file downloads, and other traffic not sensitive to latency). WMM is good as far as it goes, but it ameliorates only wireless network contention. It does nothing to resolve the battle for bandwidth among wired network clients.
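WMM’s four access categories come from the eight 802.1d user priorities carried in tagged frames. The default mapping can be sketched as a simple lookup (this is the standard table; vendors sometimes special-case particular traffic):

```python
# Default 802.1d user-priority -> WMM access category mapping.
WMM_AC = {
    1: "background", 2: "background",
    0: "best effort", 3: "best effort",
    4: "video", 5: "video",
    6: "voice", 7: "voice",
}

def wmm_category(user_priority):
    """Return the WMM access category for an 802.1d user priority (0-7)."""
    return WMM_AC[user_priority]

print(wmm_category(6))  # voice -- the highest-priority category
print(wmm_category(0))  # best effort -- note UP 0 outranks UP 1 and 2
```

The counterintuitive detail is that priorities 1 and 2 (background) rank below 0 (best effort), so untagged traffic isn’t starved by bulk transfers.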

Better routers go further to cover both sides of the network. They automatically choose which traffic gets priority based upon assumptions—putting video and voice ahead of file downloads, for instance. The intelligence behind each vendor’s QoS functionality, however, varies according to the quality of the algorithm in use and the processor power available to run it.


Qualcomm’s StreamBoost technology enables the D-Link DGL-5500 to display exactly what’s consuming the majority of your network’s bandwidth.

Right now, Qualcomm’s StreamBoost traffic-shaping technology seems to be the hot QoS ticket. StreamBoost, first announced in January 2013, is based on technology originally developed by Bigfoot Networks. Bigfoot, a company that Qualcomm acquired in 2011, designed network-interface cards targeted at gamers, who are among the most latency-sensitive computer users in the world.

Qualcomm doesn’t manufacture routers, but the company does design and manufacture processors that go into high-end consumer routers such as Netgear’s Nighthawk X4 and D-Link’s DGL-5500 Gaming Router. While there’s no technological barrier to running StreamBoost on a Marvell or Broadcom processor, Qualcomm currently doesn’t license the firmware separately from its chips.

StreamBoost can distinguish between and prioritize latency-sensitive traffic (audio, video, gaming, and so on) over latency-insensitive traffic (downloads, file transfers, etc.), and it can adjust its allocation of bandwidth to various network activities to ensure all clients get a good experience. If several clients are streaming Netflix videos at the same time, for instance, it can automatically reduce one or more of those streams from 1080p quality to 720p quality to ensure all the sessions have enough bandwidth.

What’s more, StreamBoost can distinguish among the types of client devices and reduce the image quality streaming to a smartphone or tablet, because the degradation won’t be as noticeable on those small screens as it would be on a big-screen smart TV.


StreamBoost lets you assign priorities to client PCs, so you can preserve bandwidth for a smart TV at the expense of a PC used for BitTorrent downloads, for instance.

StreamBoost’s bandwidth graphs and tools provide better visibility and more precise tuning than other QoS tools I’ve seen. And if you opt in to participate, you’ll receive ongoing updates from Qualcomm’s database in the cloud so that your router can continually optimize its performance and learn how to handle new devices that come on the market. StreamBoost support alone won’t make a crappy router great, but it can make a difference.

Don’t stop with QoS

Good Quality of Service is essential if you use your network to stream video, play online games, make VoIP and Skype calls, or watch YouTube (and if you don’t do any of those things, you wouldn’t have clicked on this story in the first place). The performance benefits you’ll realize might even save you from moving up to a pricier service tier with your ISP.

Linksys WRT1900AC Wi-Fi router
An 802.11ac router can deliver higher performance even with clients that are equipped with 802.11n adapters.

But there are other things you can do beyond traffic shaping. Perform a site survey using a tool such as Kismet to see which radio channels your neighbors are relying on, and configure your router to use something else. There are only three non-overlapping channels in the 2.4GHz frequency band: 1, 6, and 11. Use one of these if possible.
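Why only channels 1, 6 and 11? Adjacent 2.4GHz channel centers sit 5MHz apart, but an 802.11 transmission is roughly 22MHz wide, so channels fewer than five apart interfere with each other. A quick sketch of the arithmetic (a simplified model, ignoring spectral-mask details):

```python
def center_mhz(channel):
    """Center frequency in MHz of a 2.4GHz Wi-Fi channel (1-13)."""
    return 2407 + 5 * channel

def overlaps(a, b):
    # Two ~22MHz-wide transmissions collide when their center
    # frequencies are closer than about 22MHz apart.
    return abs(center_mhz(a) - center_mhz(b)) < 22

print(center_mhz(6))   # 2437
print(overlaps(1, 6))  # False -- safe to use side by side
print(overlaps(1, 4))  # True -- nearby channels interfere
```

Running this over all channel pairs confirms that 1, 6 and 11 are the only set of three mutually non-overlapping channels in the band.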

If you have a dual-band router that supports both the 2.4- and 5GHz frequency bands, use the less-crowded higher frequency for latency-sensitive traffic such as media streaming, and reserve 2.4GHz for things like downloads. There are many more non-overlapping channels at 5GHz, and the higher channels — 149 and up — support more bandwidth than the lower channels.

Lastly, if you’re using an 802.11n (or older) router, consider moving up to a model based on the newer 802.11ac standard. Even if your clients are stuck with 802.11n adapters, you’ll still see a significant performance boost with an 802.11ac router.




Google is on track to spend more money this year attempting to influence lawmakers than any other tech company

Google and Facebook continued to pour millions of dollars into political lobbying in the third quarter in attempts to influence U.S. lawmakers and have legislation written in their favor.

Google spent $3.94 million between July and September while Facebook spent $2.45 million, according to disclosure data published Tuesday.

The only tech-related company to outspend Google was Comcast, which is trying to persuade politicians to look favorably on a merger with Time Warner and spent $4.23 million during the quarter.

But Google stands as the largest spender in the entire tech industry to date this year. It has run up a $13 million bill lobbying Washington politicians and their offices on a range of issues as diverse as online regulation of advertising, cybersecurity, patent abuse, health IT, international tax reform, wind power and drones.

If industry spending continues at its current level, 2014 will mark the fourth year that Google has spent more money on federal lobbying than any other technology company.

Facebook began lobbying Washington in 2009 and has quickly risen to become the fourth-largest spender in the tech industry so far this year, behind Google, Comcast and AT&T.

The company’s lobbying hits an equally diverse range of areas including cyber breaches, online privacy, free trade agreements, immigration reform, Department of Defense spending and intellectual property issues.

Another notable spender in the third quarter was Amazon, which plowed $1.18 million into its lobbying efforts. That represents a quarterly record for the Seattle company and is the second quarter in a row that it has spent more than $1 million on lobbying.

Amazon’s lobbying was aimed at many of the same areas targeted by Google and Facebook, but covered additional subjects close to its business, including postal reform, online wine sales, mobile payments and Internet tax payments.

The money is funneled to D.C. lobbying firms that use it to push their clients’ agendas to politicians and their staffers. The lobbying disclosure reports are published quarterly by the U.S. Senate and detail spending in general areas, but do not go into specifics.

Lobbying has long been an effective tool used by major companies, but it’s only been in the last few years that Internet companies have started spending money in amounts to rival traditional tech giants.

During the third quarter, other major spenders included Verizon ($2.91 million), CTIA ($1.95 million), Microsoft ($1.66 million) and Oracle ($1.2 million).

Apple spent just over $1 million in the quarter lobbying on issues including consumer health legislation, transportation of lithium ion batteries, international taxes, e-books, medical devices and copyright.



10 Tips to Ensure Your IT Career Longevity

Written by admin
October 19th, 2014

Enjoying a long career doesn’t happen by accident. It takes planning and effort. Use these tips to get your head in the game and keep your eye on the future.

Many people say that IT and technology are a young man’s game, and if you look at most influential tech companies you might agree. Most IT workers employed at those companies are under 35 and male. However, these big-name firms employ only a fraction of tech professionals, and there are plenty of opportunities out there for everyone. IT has one of the lowest unemployment rates of any industry because in most organizations technology touches every part of the business.

Be Responsible for Your Own Career
Achieving career longevity in the IT business takes effort, time and resources — and nobody but you can organize, facilitate and be responsible for all of it. To stay ahead of the learning curve you need to think about your goals and architect your future.

Many organizations are getting better at providing embedded employee performance and career management processes, according to Karen Blackie, CIO of Enterprise Systems & Data for GE Capital. However, she warns that you are your own best advocate and should always strive to “own” your career. Don’t wait for your organization to do it for you because that day may never come.

This means stepping back and thinking about where you want to be in X amount of time and then outlining the different skills and experience needed to get there. With that information you can start mapping out your career. “Doing research into what interests you, setting goals and objectives and then having a plan around how you will accomplish those goals is very important,” says Blackie. Remember, positions get eliminated and things don’t always work out, so it’s wise to consider alternate paths.

Flexibility and Agility Required
Technology moves at an unprecedented pace, which means you’ve got to be flexible. “Adaptability is key. CIOs who can’t adapt to that change will see themselves – unfortunately – left behind in a competitive job market. But the CIOs who see each new change – whether mobile, BYOD, Cloud, IoT – as an opportunity are the technology executives who will continue to be in demand, because they’ve proven that they can leverage new solutions to drive business value,” says J.M. Auron, IT executive resume writer and president of Quantum Tech Resumes.
Learn About the Business

“Having business knowledge is a key foundational element of one’s career,” says GE Capital’s Blackie. Being a great developer isn’t enough if you plan to climb the corporate ladder. You’ve got to understand your industry and how your company does business. This kind of knowledge can also make you a better programmer: by better understanding business needs, you can deliver products, software and services that better align with the business.

Always Be Learning
The price of career longevity in the world of IT and technology is constant learning. If you aren’t passionate about it or you’re complacent, it’s easy to find yourself locked into outdated technology and left behind. There are many ways to stay current, such as a formal college environment or a certification course. “It is your career and it is up to you to keep educating yourself,” says Robert P. Hewes, Ph.D., senior partner with Camden Consulting Group, with oversight for leadership development and management training.

Professional organizations, conferences, developer boot camps and meet-ups are all great ways to stay abreast of the newest technologies and build network connections within your industry. “It’s often a place where you develop life-long friends and colleagues,” says Blackie.

Attend Industry Conferences

Industry conferences are a great way to learn about the newest trends in technology as well as network with like-minded people who hold similar interests. Be selective about which conferences you attend and make sure you allot the necessary time to socialize and network with your peers.

“One mistake attendees often make at conferences is filling their schedule so tightly with panels that they miss out on the networking available during downtime. It’s important to attend mixers and informal gatherings at conferences to meet your peers and build relationships that could last throughout your career,” says Blackie.

Incorporate Time into Your Day for Reading
Set aside a little time each day to stay current with the goings-on in your part of technology and beyond. “Become a regular reader of info in your industry, be it an industry journal or an online blog/magazine. There is a lot of information out there. Another quick way to find relevant information is via an aggregator; Pocket and LinkedIn do this,” says Hewes.

Google News and a host of other news aggregators like LinkedIn Pulse or Reddit offer a daily stream of news, with alerts and notifications that allow users to focus on key areas of interest.

Pay Attention to Competitors
“It’s important to get to know industry competitors and watch what they’re doing. You can learn a lot from the successes and failures of your competitors,” says Blackie. Being first isn’t always required to be successful. Doing it better than the next guy is, however. Find your competitors as well as organizations that you think are thought leaders in your industry and follow them in social media or create a Google Alert for them.

Find a Mentor or Coach

Mentoring is useful at all levels of one’s career. A mentor can help you negotiate internal politics or provide insight into how to solve lingering problems. You may also have different mentors throughout your career, each offering a different perspective or expertise.

Understand the Value of Social Media
Not everyone adores social media, but it’s a necessary element in the race to separate you from the rest of IT professionals. Build and maintain profiles on relevant social media sites and then use them to explain the value proposition you offer.

Work on Soft Skills and Some Not-so-Soft Ones

Branding Skills

Branding is what helps separate you from the rest of the pack and explains your value proposition to your employer or prospective employers. “Branding is another key for career advancement – and one that few technology leaders have fully embraced. Giving thought to that brand is key for career longevity and advancement,” Auron says.

Communication Skills

According to Auron, the ability to find the right path, communicate value and build enthusiasm is a crucial step in transforming the perception of IT from that of a cost center to that of a business enabler. “The most critical skill is the ability to communicate the real value of technology investment to nontechnical leadership. Some technologists can fall into one of two traps: giving so much detail that the audience’s eyes glaze over, or appearing patronizing when intelligent – but nontechnical – leaders don’t get a specific reference,” Auron says.

Project Management Skills

At some point in your technology career you will be asked to lead a project. When the time comes make sure you’ve got the necessary tools. “It is critical if you are headed onto the management track. In fact, you should try to gain wide experience with all kinds of projects,” says Hewes.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

8 headline-making POS data breaches

Written by admin
October 14th, 2014

The rash of data breaches in the US through POS terminals has many looking at the “chip and PIN” model used in Europe.

POS swipes
Next month marks an unceremonious anniversary of the 1960s-vintage “swipe-and-signature” magnetic stripe card system. To some, the end of this point-of-sale technology cannot come a moment too soon. While talks continue on implementing the “chip and PIN” system, we look back at the most recent incidents involving stealing information from these devices.

Home Depot
Data thieves were the ones following The Home Depot’s credo of “More doing,” as they did more doing with other people’s money. The Home Depot reported a data breach earlier in September. The home improvement store confirmed that 2,200 stores were compromised in the U.S. and Canada. The number of credit cards affected may have reached 56 million.

Target
Around 70 million customers found coal in their stockings when Target was targeted by thieves just before this past Christmas.

Michaels
The arts and craft chain reported more than 3 million credit cards affected by its breach. This was a big one-two punch, with Target’s breach occurring just a month earlier.

Beef O’Brady
Customers had a beef with Beef O’Brady’s when, in early September, the Florida chain’s point-of-sale system was the victim of a data breach. There have been no reports yet on how many cards were affected by the breach.

Dairy Queen
Dairy Queen was hit with a blizzard of credit card numbers stolen at the end of August. Dairy Queen reported a data breach of its POS (point of sale) system after malware that authorities are calling “Backoff” was found on the system. Currently the restaurant chain is unclear as to how many stores were affected.

P.F. Chang’s
CSO’s Dave Lewis reported: On June 10, 2014, the staff at P.F. Chang’s received a visit that they didn’t want to have come knocking on their door. The US Secret Service came calling to alert the restaurant chain that it had been compromised by a criminal gang that was stealing data from its point-of-sale systems.

Shaw’s and Star Market
Shaw’s and Star Market, which operate more than 50 grocery stores in Massachusetts, also reported a POS breach, but there has been no word on how many credit cards were ripped off.

TJ Maxx
At over 45 million cards, the 2007 data breach at TJ Maxx is the granddaddy of them all. According to Computerworld: In filings with the U.S. Securities and Exchange Commission yesterday, the company said 45.6 million credit and debit card numbers were stolen from one of its systems over a period of more than 18 months by an unknown number of intruders.




Rise of smart machines, ubiquitous access and software-defined architectures will reshape IT, Gartner says

ORLANDO—Gartner defines its Strategic Technology Trends as those technologies that have the most potential to drive great change in the enterprise IT arena in the next three years.

Indeed this year’s crop has that potential as trends like software-defined networks and 3D printing take center stage in Gartner’s list.

“You need to be looking at linking to customers in new and unique ways, and at what technologies set the foundation to enable these moves,” said Gartner vice president David Cearley. IT will be dealing with everything from virtual technologies to intelligent machines and analytics data everywhere, he said. “And in the end all things run through a completely secure environment.”

So Gartner’s Top 10 Strategic Technology Trends for 2015 list looks like this:
1. Computing everywhere: Cearley says the trend is not just about applications but rather wearable systems, intelligent screens on walls and the like. Microsoft, Google and Apple will fight over multiple aspects of this technology. You will see more and more sensors that will generate even more data, and IT will have to know how to exploit this—think new ways to track users and their interactions with your company—in an effective, positive way.

2. The Internet of things: Yes, this one is getting old it seems, but there’s more to it than the hype. Here IT will have to manage all of these devices and develop effective business models to take advantage of them. Cearley said IT needs to get new projects going and to embrace the “maker culture” so people in their organizations can come up with new solutions to problems.

3. 3D Printing: Another item that has been on the Gartner list for a couple of years. But things are changing rapidly in this environment. Cearley says 3D printing has hit a tipping point in terms of the materials that can be used and the price points of machines. It enables cost reduction in many cases. IT needs to look at 3D printing and think about how it can make your company more agile. Can 3D printing drive innovation?

4. Advanced, Pervasive and Invisible Analytics: Security analytics are the heart of next-generation security models. Cearley said IT needs to look at building data reservoirs that tie together multiple repositories, letting IT see all manner of new information, such as data usage patterns and what he called “meaningful anomalies” that it can act on quickly.
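
The “meaningful anomaly” idea can be illustrated with a simple statistical sketch: flag any point in a usage series that sits far from the rest. Real security analytics are far more sophisticated than this; the z-score filter and sample data below are invented purely for illustration.

```python
import statistics

def meaningful_anomalies(samples, threshold=2.5):
    """Flag data points whose z-score exceeds the threshold.

    A toy stand-in for spotting unusual data-usage patterns in a
    reservoir of telemetry: compute the series mean and spread, then
    return the values that deviate sharply from the norm.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:                      # a flat series has no anomalies
        return []
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Eight routine readings and one spike in, say, gigabytes transferred:
print(meaningful_anomalies([10, 11, 9, 10, 10, 11, 9, 10, 50]))  # [50]
```

The same shape of check, applied continuously across many data sources, is what lets a security team act quickly on the outliers rather than wading through everything.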

5. Context-Rich Systems: This one has been a Gartner favorite for a long time – and with good reason. The use of systems that utilize “situational and environmental information about people, places and things” in order to provide a service, is definitely on the rise. IT needs to look at creating ever more intelligent user interfaces linking lots of different apps and data.

6. Smart Machines: This one is happening rapidly. Cearley pointed to IBM’s Watson, which is “learning” to fight cancer, and a mining company, Rio Tinto, which is using automated trucks in its mines. Virtual sages, digital assistants and other special service software agents will abound in this world, he said.

7. Cloud/Client Computing: This trend was on last year’s list as well but Gartner says the need to develop native apps in the cloud versus migrating existing apps is the current issue.

8. Software-Defined Applications and Infrastructure: In order to get the agility new environments demand, we cannot have hard-coded, predefined networks, Cearley said. IT needs to be able to construct dynamic relationships. Software-defined technologies help at that scale.

9. Web-Scale IT: This trend remains pretty much the same as last year. Gartner says Web-scale IT is a pattern of global-class computing technologies that deliver the capabilities of large cloud service providers. The likes of Amazon, Google and others are re-inventing the way IT services can be delivered. It still requires a cultural IT shift to be successful.

10. Risk-Based Security and Self-protection: Cearley said all roads to success in the digital future lead through security. Trends here include building applications that are self-protecting.



From electronic pills to digital tattoos, these eight innovations aim to secure systems and identities without us having to remember a password ever again

8 cutting-edge technologies aimed at eliminating passwords
In the beginning was the password, and we lived with it as best we could. Now, the rise of cyber crime and the proliferation of systems and services requiring authentication have us coming up with yet another not-so-easy-to-remember phrase on a near daily basis. And is any of it making those systems and services truly secure?

One day, passwords will be a thing of the past, and a slew of technologies are being posited as possibilities for a post-password world. Some are upon us, some are on the threshold of usefulness, and some are likely little more than a wild idea, but within each of them is some hint of how we’ve barely scratched the surface of what’s possible with security and identity technology.

The smartphone
The idea: Use your smartphone to log into websites and supply credentials via NFC or SMS.

Examples: Google’s NFC-based tap-to-unlock concept employs this. Instead of typing passwords, PCs authenticate against the user’s phone via NFC.

The good: It should be as easy as it sounds. No interaction from the user is needed, except any PIN they might use to secure the phone itself.

The bad: Getting websites to play along is the hard part, since password-based logins have to be scrapped entirely for the system to be as secure as it can be. Existing credentialing systems (e.g., Facebook or Google login) could be used as a bridge: Log in with one of those services on your phone, then use the service itself to log into the site.

The smartphone, continued
The idea: Use your smartphone, in conjunction with third-party software, to log into websites or even your PC.

Examples: Ping Identity. When a user wants to log in somewhere, a one-time token is sent to their smartphone; all they need to do is tap or swipe the token to authenticate.

The good: Insanely simple in practice, and it can be combined with other smartphone-centric methods (a PIN, for instance) for added security.

The bad: Having enterprises adopt such schemes may be tough if they’re offered only as third-party products. Apple could offer such a service on iPhones if it cared enough about enterprise use; Microsoft might if its smartphone offerings had any traction. Any other takers?
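
As a rough sketch of the one-time-token flow described above: the server mints a short-lived token, pushes it to the phone, and accepts the login only while the token’s time window is still open. This is a generic HMAC-based scheme, not Ping Identity’s actual protocol; the user name and time-window length are invented.

```python
import hashlib
import hmac
import os

def issue_token(secret, user, now, ttl=60):
    """Server side: mint a token bound to a user and a time window."""
    window = int(now) // ttl                      # coarse time bucket
    msg = f"{user}:{window}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

def verify_token(secret, user, token, now, ttl=60):
    """Server side: recompute the expected token and compare safely."""
    expected = issue_token(secret, user, now, ttl)
    return hmac.compare_digest(expected, token)

secret = os.urandom(32)                           # shared server secret
t = issue_token(secret, "alice", now=1000)        # pushed to the phone
print(verify_token(secret, "alice", t, now=1010))  # same window: True
print(verify_token(secret, "alice", t, now=2000))  # expired: False
```

All the user does is tap to send the token back; the expiry window is what keeps a stolen token from being useful later.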

Biometrics
The idea: Use a fingerprint or an iris scan — or even a scan of the vein patterns in your hand — to authenticate.

Examples: They’re legion. Fingerprint readers are ubiquitous on business-class notebooks, and while iris scanners are less common, they’re enjoying broader deployment than they used to.

The good: Fingerprint recognition technology is widely available, cheap, well-understood, and easy for nontechnical users.

The bad: Despite all its advantages, fingerprint reading hasn’t done much to displace the use of passwords in places apart from where it’s mandated. Iris scanners aren’t foolproof, either. And privacy worries abound, something not likely to be abated once fingerprint readers become ubiquitous on phones.

The biometric smartphone
The idea: Use your smartphone, in conjunction with built-in biometric sensors, to perform authentication.

Examples: The Samsung Galaxy S5 and HTC One Max (pictured) both sport fingerprint sensors, as do models of the iPhone from the 5S onwards.

The good: Multiple boons in one: smartphones and fingerprint readers are both ubiquitous and easy to leverage, and they require no end user training to be useful, save for registering one’s fingerprint.

The bad: It’s not as hard as it might seem to hack a fingerprint scanner (although it isn’t trivial). Worst of all, once a fingerprint is stolen, it’s, um, pretty hard to change it.

The digital tattoo
The idea: A flexible electronic device worn directly on the skin, like a fake tattoo, and used to perform authentication via NFC.

Examples: Motorola has released such a thing for the Moto X (pictured), at a cost of $10 for a pack of 10 tattoo stickers, with each sticker lasting around five days.

The good: In theory, it sounds great. Nothing to type, nothing to touch, (almost) nothing to carry around. The person is the password.

The bad: So far it’s a relatively costly technology ($1 a week), and it’s a toss-up as to whether people will trade typing passwords for slapping a wafer of plastic somewhere on their bodies. I don’t know about you, but even a Band-Aid starts bothering me after a few hours.

The password pill
The idea: This authentication technology involves ingesting something into your body — an electronic “pill” that can send a signal of a few bits through the skin.

Examples: Motorola demonstrated such a pill last year, one produced by Proteus Digital Health that is normally used for gathering biometrics for patient care (pictured).

The good: A digital pill makes the authentication process completely passive, save for any additional manual authentication (e.g., a PIN) that might be used.

The bad: Who is comfortable (yet) with gulping down a piece of digital technology? Like the digital tattoo, this doesn’t sound like something one would want to use regularly, but rather more as a day pass or temporary form of ID.

Voice printing
The idea: Use voice recognition to authenticate, by speaking aloud a passphrase or a text generated by the system with which you’re trying to authenticate.

Examples: Porticus, a startup profiled back in 2007, has an implementation of this technology (“VoiceKeyID”), available for multiple mobile and embedded platforms.

The good: The phrase used to identify you isn’t the important part; it’s the voice itself. Plus, it can be easily changed; speaking is often faster than typing or performing some other recognition; and it’s a solution that even works in a hands-free environment. Plus, microphones are now standard-issue hardware.

The bad: As with any technology that exists in a proprietary, third-party implementation, the hard part is getting people to pick up on it.

Brainwave authentication
The idea: Think your password and you’re logged in. That’s right: an authentication system that uses nothing but brainwaves.

Examples: A prototype version of the system, using a Bluetooth headset that contained an EEG sensor, has been demonstrated by folks at the University of California Berkeley School of Information. The “pass-thoughts” they used consisted of thinking about some easily memorized behavior, e.g., moving a finger up and down.

The good: Consumer-grade EEG hardware is cheap, and the tests conducted by the School of Information showed it was possible to detect a thought-out password with a high degree of accuracy.

The bad: Donning a headset to log in seems cumbersome — that is, assuming you’re not spooked by the idea of a computer reading your thoughts.



9 Rules for the Developer-ization of IT

Written by admin
September 15th, 2014

CIOs who want to drive innovation and change in their organizations should focus on making the lives of developers easier so they can innovate, produce great apps and deliver valuable IP.

The acceptance of SaaS, the cloud and other easily accessible technologies allows the lines of business to drive innovation without necessarily turning to IT for help.

While the CIO can take back some of that ground by becoming a broker and orchestrator of services, the real key to driving innovation and change today is app development, according to Jim Franklin, CEO of SendGrid, a provider of email infrastructure as a service.

Franklin says that much like the consumerization of IT that has been underway for several years, CIOs now need to embrace the “developer-ization of IT,” which is about allowing developers to focus on innovating, producing great apps and delivering valuable IP.

Rule 1: Embrace the Public Cloud
The public cloud offers developers access to scalable, flexible infrastructure. With it, they can scale up as necessary while consuming only what they need. The efficiencies created by the scale at which public clouds operate are something you just can’t replicate on your own.

“There’s no need to reinvent the wheel by building servers, storage and services on your own,” Franklin says. “This will shave precious time off of project schedules, reduce time to market and lower costs significantly.”

Rule 2: Adopt Enterprise Developer Marketplaces
Access to marketplaces full of enterprise-ready tools and APIs will allow your developers to build better applications faster.

“Embrace the new breed of enterprise developer marketplaces,” Franklin says. “Give developers access to more tools that are enterprise-ready. An emerging set of marketplaces from Windows Azure, Heroku and Red Hat provide a variety of new tools and services to ramp up application development productivity.”

Rule 3: Forget Long-Term Contracts for Tools and Services
A long-term contract for a service or tool may make financial sense, but can be a barrier to developer efficiency and agility. Instead, make it as easy as possible for developers to self-select the best tool for the job at hand.

“The nature of application development can be very transitory at times,” Franklin says. “Developers may need one service or tool one day and then pivot to something else the next, and they like to try and test tools before they make a financial commitment. Make the process of using different tools and vendors frictionless for them so they can self-select the tools they want. Long-term contracts impede this, since approvals are needed from procurement or legal, and this can draw out the process.”

Rule 4: Recognize that Developers Speak Their Own Language
When trying to communicate with developers — whether you’re trying to attract talent, project manage or target them for sales — tailor your messages for a highly technical audience and use the communication channels they’re comfortable with. This could mean user forums, hackathons or social media.

“The key is trying to be very flexible and open,” Franklin says. “Hackathons have really become a strong trend because of the power of letting people show their creativity in a lot of different ways.”

Rule 5: Give Developers Freedom With Controls
Creative solutions require the ability to experiment freely. Embrace that, but also put some controls in place for your own peace of mind. Franklin suggests deploying API management solutions and monitoring tools so IT can have a window into the traffic flowing through the network and can ensure security measures are taken into account.
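
The combination of freedom and control Franklin describes can be sketched as a toy gateway: every call flows through one choke point that both records traffic (the monitoring window) and enforces an allow list (the control). The endpoint names here are invented, and real API management products offer far richer policy and analytics than this.

```python
from collections import Counter

class MonitoredGateway:
    """A toy stand-in for an API management layer.

    Every call is routed through the gateway, which records traffic per
    endpoint and blocks endpoints that are not on the approved list.
    """

    def __init__(self, allowed):
        self.allowed = set(allowed)
        self.traffic = Counter()

    def call(self, endpoint, handler):
        self.traffic[endpoint] += 1          # the window into the traffic
        if endpoint not in self.allowed:     # the control
            return {"status": 403}
        return {"status": 200, "body": handler()}

gw = MonitoredGateway(allowed=["/billing"])
gw.call("/billing", lambda: "ok")
gw.call("/experimental", lambda: "new-tool")
print(gw.traffic)   # traffic is recorded even for blocked calls
```

Developers stay free to try new endpoints, while IT can see, after the fact, exactly what was attempted and how often.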

Rule 6: Don’t Get Locked Down by a Platform or Language
Encourage your developers to build apps that are platform agnostic across Web, mobile and for Internet of Things devices from the start. Designing apps to be platform agnostic in the first place can save developers a lot of grief in the long-term.

“Rather than building for the Web and then adding a mobile extension later, developers should keep this in mind at the start of the app development process,” Franklin says. “If an application has a physical aspect to it, developers should be encouraged to define, deploy, communicate and manage the IoT application in a scalable fashion from the start.”

Rule 7: Give Developers Space to Pursue Their Own Projects
Developers are creative people with a natural love for making things. They’ll be happiest if you provide them with a collaborative, creative outlet for their own projects, and they may just solve an intractable problem in the process.

“While the Google 20 percent idea — where employees used to take one day a week to work on side projects — may not work for everyone, understand that developers have an inherent desire to share new tools, hacks, shortcuts and passion projects with their peers,” Franklin says. “Give them time to do this at work. Some of these ideas may end up in your products. Gmail, AdSense and Google Hangouts, for example, all started as side projects of Google employees.”

Rule 8: Set Standards for Coding in RESTful, Modern APIs
Issue a set of best practices related to usable standards like REST. This, Franklin says, will allow developers to more rapidly build applications that access and act upon data exposed via APIs, even in environments with unreliable network speeds and limited computing power.

“REST also makes it easy for humans to understand what’s being exchanged while allowing computers to talk to one another efficiently,” Franklin adds.
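
As a minimal sketch of those REST conventions (resource nouns in the path, standard verbs, plain JSON bodies), consider a toy ticket service; the resource names and payloads are invented for illustration.

```python
import json

tickets = {}   # in-memory store standing in for a real database

def handle(method, path, body=None):
    """Dispatch (verb, resource) pairs the way a REST API would."""
    parts = path.strip("/").split("/")
    if method == "POST" and parts == ["tickets"]:
        ticket_id = str(len(tickets) + 1)
        tickets[ticket_id] = json.loads(body)      # create a resource
        return 201, {"id": ticket_id}
    if method == "GET" and len(parts) == 2 and parts[0] == "tickets":
        if parts[1] in tickets:
            return 200, tickets[parts[1]]          # read a resource
        return 404, {"error": "not found"}
    return 405, {"error": "unsupported"}

status, out = handle("POST", "/tickets", '{"title": "printer down"}')
print(status, out)                    # 201 {'id': '1'}
print(handle("GET", "/tickets/1"))    # (200, {'title': 'printer down'})
```

Because both the URL and the JSON are human-readable, the exchange is easy to inspect and debug, which is exactly the property Franklin highlights.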





Guys, stop creeping out women at tech events

Written by admin
September 10th, 2014

Not realizing that your behaviors constitute harassment is no excuse

I go to a lot of security conferences, but I never gave much thought to this curious fact: The conferences are hardly ever headlined by women. In fact, not a lot of women attend security conferences and other tech events.

I guess, if I noticed this at all, I chalked it up to the general dearth of women in the technology field. But the scarcity of women at events goes beyond that, and my eyes have only recently been opened to this fact and the reality that explains it.

My awakening began through my role as president of the Information Systems Security Association. I’ve been leading the creation of special interest groups (SIG) with the goal of making ISSA a more virtual organization that could bring together people with common technical interests from around the world. I was somewhat surprised that common technical interests were not driving the most energetic SIG, Women in Security.

I was curious. Why were women so interested in banding together? As I started asking questions, it began to make sense. And I found in all this a message for men.

IT guys, women are uncomfortable around us. Enough of us are acting like creeps around them that they would rather not join us in large groups. Even in a virtual setting like SIGs, they would rather get together with other women.

My questioning revealed to me that women are harassed on a regular basis at professional events. The harassment is often minor, but there have been cases of physical assault and even rape. Part of the problem is that to too many men in IT, a lot of the minor harassment incidents at tech events — men putting their arms around women’s shoulders, men hitting on women, men telling women graphic details about their sexual exploits — sound like no big deal.

Men who feel that way are not adept at empathy. They do not see that sexual overtures made to a woman are threatening in a way that sexual overtures made to a man rarely are. They do not imagine that a woman might need to maintain a personal physical space in order to feel safe. They might not even understand that hitting a woman on the buttocks is physical assault.

What everyone needs to understand is that even minor harassment is a significant reason for women to avoid tech events, networking opportunities and conferences. Maybe you would be flattered to be propositioned by a woman at an event. That doesn’t mean that a woman is going to feel the same way. Most women don’t like it. Especially in a professional setting. And even more so when she is one of very few women in a room full of men who are leering at her as if she were a zebra walking through a pride of lions.

Guys, you know what it means to be professional, don’t you? And you know that a professional conference is not the same as a pickup bar or a frat mixer, right? All right, then, if you tell a woman that she gave a great presentation, don’t follow it up by hitting on her. That pushes professionalism right out the window. So does invading a circle of people who are networking at an event and putting your arm around the lone woman.

These are all things that have actually happened, and in every case, the women involved told me, they did nothing to encourage the harassing behavior. So what gave those guys the idea that they had license to do these things? Ignorance, and the fact that many men within the tech industry are socially awkward. This is not offered as an excuse, but a lot of techies just don’t know when they are making a woman uncomfortable. I’m sure that I have unintentionally offended someone at some point in my career. But techies are also very smart, and knowing that this problem exists, we are capable of changing our behaviors so that women don’t keep paying the price of our social awkwardness.

That said, though, some men in the tech industry intentionally practice sexually aggressive behaviors. That is completely unacceptable, and it is why I support the Ada Initiative in its efforts to set up a code of conduct for events. (I’m not comfortable with reports that the Ada Initiative may have been involved in censorship at a B-Sides security conference, but that is another matter.) Event organizers and other men in general need to take a stance to filter out these people proactively. Likewise, the women have to speak up and let the event organizers know who the serial harassers are.

One final observation: If harassment itself isn’t disturbing enough, many women blame themselves when they are its victims. They feel that they should have been strong enough to confront the harasser and tell him to stop, but failed to do so because they found themselves surrounded by strangers and caught off guard. With all of those eyes on them, they didn’t want to seem like troublemakers. And they have seen, time and time again, women who complain about harassment being called uptight bitches who are just imagining things or making them up. So they just swallow their pride and keep quiet. But later, they blame themselves for not standing up to the perpetrator.

After talking to many women in the tech profession in recent weeks, I have learned that a lot of them have decided that networking and participating at tech events is not worth the grief that they have to put up with. Clearly, the tech profession has a problem if women feel forced to make this choice. And, guys, it would be morally reprehensible if we didn’t do all we can to rectify that situation.




Automation technology is getting better as help desk requests continue to rise

Competing forces are affecting people who work on help or service desks. One is improving automation tools, which advocates say can replace level 1 and 2 support staff. At the same time, the number of help desk tickets is rising each year, which puts more demand on the service desk.

These cross-currents in the industry make it hard to predict the fate of some IT jobs. A Pew survey, released in August, of nearly 1,900 experts found a clear split on what the future may bring: 52% said tech advances will not displace more jobs than they create by 2025, but 48% said they will.

Either way, a push toward automation is certain. In the help desk industry, the goal is to keep as many calls for help as possible at either Level 0, which is self-help, or Level 1. It’s called “shift-left” in the industry.

“It costs way more to have a Level 3 or Level 2 person resolve an issue, and it also takes a lot more time,” said Roy Atkinson, an analyst at HDI, formerly known as the Help Desk Institute. To keep costs down, help desks are increasingly turning to automation and improvements in technologies such as natural language processing, he said.

A Level 1 worker will take an initial call, suggest a couple of fixes, and then — lacking the skill or authority to do much more — escalate the issue. The Level 2 worker can do field repair work and may have specific application knowledge. A Level 3 escalation might involve working directly with application developers, while Level 4 means taking the problem outside to a vendor.
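
The escalation ladder just described can be sketched as a routing function that always picks the lowest, cheapest level able to resolve an issue, which is the “shift-left” goal. The knowledge-base entries, scripted fixes and flags below are invented for illustration.

```python
KNOWLEDGE_BASE = {"password reset"}        # Level 0: self-help articles
LEVEL1_FIXES = {"reboot", "clear cache"}   # quick scripted fixes

def route_ticket(issue, needs_field_repair=False, needs_developer=False,
                 vendor_fault=False):
    """Return the lowest support level that can resolve the issue."""
    if vendor_fault:
        return 4            # taken outside to a vendor
    if needs_developer:
        return 3            # works directly with application developers
    if needs_field_repair:
        return 2            # field repair / app-specific knowledge
    if issue in KNOWLEDGE_BASE:
        return 0            # user helps themselves
    if issue in LEVEL1_FIXES:
        return 1            # Level 1 suggests the fix
    return 2                # lacking skill or authority, escalate

print(route_ticket("password reset"))                           # 0
print(route_ticket("unknown odd crash", needs_developer=True))  # 3
```

Every issue the knowledge base or Level 1 scripts can absorb is one that never reaches the expensive tiers, which is where the cost savings come from.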

Among the companies developing automation tools is New York-based IPsoft, a 15-year-old firm with more than 2,000 employees. It develops software robotic technology and couples it with management services.

A majority of IT infrastructure will eventually be “managed by expert systems, not by human beings,” said Frank Lansink, the firm’s CEO for the European Union. IPsoft says its technology can now eliminate 60% of infrastructure labor tasks.

IPsoft’s autonomic tools might discover, for instance, a network switch that isn’t functioning, or a wireless access point that is down. The system creates tickets and then deploys an expert system, a software robot with the programming to make the repair. If it can’t be done, a human intervenes.
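
A hypothetical version of that autonomic loop (detect, ticket, attempt a scripted repair, escalate to a human only on failure) might look like the sketch below. The device names and the repair predicate are invented; this is not IPsoft’s actual product behavior.

```python
def run_repair_cycle(devices, robot_fix):
    """Return (auto_fixed, escalated_to_human) ticket lists.

    devices maps device name -> "up" or "down"; robot_fix is the
    software robot's scripted repair, returning True on success.
    """
    auto_fixed, escalated = [], []
    for name, status in devices.items():
        if status == "up":
            continue
        ticket = f"ticket:{name}"            # ticket created automatically
        if robot_fix(name):                  # expert system attempts repair
            auto_fixed.append(ticket)
        else:
            escalated.append(ticket)         # a human intervenes
    return auto_fixed, escalated

devices = {"switch-7": "down", "ap-12": "down", "core-1": "up"}
fixed, humans = run_repair_cycle(
    devices, robot_fix=lambda d: d.startswith("switch"))
print(fixed, humans)   # ['ticket:switch-7'] ['ticket:ap-12']
```

The labor reduction comes from the first branch: every repair the robot completes is a ticket no engineer ever touches.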

Many service desk jobs have been moved offshore over the last decade, displacing workers. That trend is ongoing. One of the ideas underlying IPsoft’s business model is a belief that offshore, as well as onshore, labor costs can be further reduced through automation.

Offshore firms are clearly interested. IPsoft’s platform was adopted last year by Infosys and, more recently, by Accenture.

One IT manager using IPsoft’s automation technology and services to support his firm’s infrastructure — including its network, servers and laptops — is Marcel Chiriac, the CIO of Rompetrol Group, a Romania-based oil industry firm with 7,000 employees serving Europe and Asia.

“Without the automation, we would have to pay a lot more” for IT support, said Chiriac.

The cost savings arise from automatic repairs and routine maintenance that might otherwise be neglected, said Chiriac.

If he weren’t using autonomic tools, Chiriac said he would have to hire more people for a similar level of service. But he can’t easily estimate the impact on staff because of the firm’s IT history. (Rompetrol Group outsourced its 140 IT staff, ended that relationship, then rebuilt an internal IT staff with about two dozen fewer workers; it also uses outsourcing as a supplement.)

Nonetheless, Chiriac doesn’t believe that infrastructure automation will necessarily eliminate IT jobs, though it may shift them to other IT areas. “In IT, we’re not going to run out of work for the next two generations,” said Chiriac.

The work that help or service desks are asked to take on is increasing. Two-thirds of 1,200 organizations surveyed by HDI reported that the number of tickets, whether to fix something broken or, for instance, to outfit a new hire or change permissions, is increasing annually by more than 60%.

The top five reasons for this increase, according to HDI’s survey, are an increase in the number of customers at surveyed firms, a rising number of applications, changes in infrastructure, increases in the scope of services, and the need to support different types of equipment and more devices. The latter could reflect BYOD use.

At the same time, support is being transformed in new ways. Service desks may, for instance, now act as a liaison for all service providers, including cloud and mobile carriers, said Atkinson.

“I think a lot of people have been predicting the death of support for a number of years, and it hasn’t happened,” said Atkinson.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at


Google is drawing from the work of the open-source community to offer its cloud customers a service to better manage their clusters of virtual servers.

On Monday, the Google Cloud Platform started offering Mesosphere’s commercial version of the open-source Mesos cluster management software.

With the Mesosphere software, “You can create a truly multitenant cluster, and that drives up utilization and simplifies operations,” said Florian Leibert, co-founder and CEO of Mesosphere. Leibert was also the engineering lead at Twitter who introduced Mesos to the social media company.

First developed by the University of California, Berkeley, Mesos can be thought of as an operating system that allows an administrator to control an entire cluster of computers, or even an entire data center, as if it were a single machine.

Thanks to its fine-tuned scheduling capabilities, Mesos can allow multiple frameworks, such as Hadoop or Spark, to share a single cluster, as well as allow multiple copies of the same framework to run on a single cluster.
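As a toy illustration of that sharing idea, the sketch below models a master handing resource offers to two frameworks that draw tasks from one pool of CPUs. The class names and numbers are invented for illustration, not the real Mesos API.

```python
# Toy model of Mesos-style two-level scheduling: the master offers spare
# resources to each registered framework in turn, and each framework
# decides how much of the offer to accept for its pending tasks.

class Framework:
    def __init__(self, name, cpus_per_task, tasks_wanted):
        self.name = name
        self.cpus_per_task = cpus_per_task
        self.tasks_wanted = tasks_wanted
        self.tasks_launched = 0

    def consider(self, offered_cpus):
        """Accept as many tasks as the offer (and our backlog) allows."""
        fit = min(offered_cpus // self.cpus_per_task,
                  self.tasks_wanted - self.tasks_launched)
        self.tasks_launched += fit
        return fit * self.cpus_per_task  # CPUs actually consumed

def run_offer_cycle(total_cpus, frameworks):
    """One round of offers: hand the remaining CPUs to each framework in turn."""
    free = total_cpus
    for fw in frameworks:
        free -= fw.consider(free)
    return free

hadoop = Framework("hadoop", cpus_per_task=2, tasks_wanted=3)
spark = Framework("spark", cpus_per_task=1, tasks_wanted=4)
leftover = run_offer_cycle(total_cpus=8, frameworks=[hadoop, spark])
print(hadoop.tasks_launched, spark.tasks_launched, leftover)  # 3 2 0
```

The point of the design is visible even in this toy: both frameworks run on the same cluster at once, so utilization stays high instead of each framework needing a dedicated, partly idle cluster.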

The software also has built-in resiliency: If one or several nodes stop working, the software can automatically move that work to other, operational nodes in that cluster.

Twitter, Airbnb, Netflix and Hubspot have all used Mesos to coordinate operations.

Google has modified Kubernetes, its new software for managing Docker containers, so that it can run on Mesos, work the company also announced Monday.

Google has been an ardent user of Docker internally, using more than 2 billion containers a week in its routine operations. The open-source Docker provides container-based virtualization, an alternative to traditional virtualization now being considered by many organizations for their workloads due to its putative performance superiority.

Now, Google customers can use a Mesosphere cluster to run Docker containers and use any leftover capacity to run other framework-based workloads.

“You’ll be able to create these modern distributed systems the way that Google does, and you’ll be able to run them side-by-side with all your existing applications,” said Craig McLuckie, Google Cloud Platform product manager.

Users can also move their workloads to any cloud provider that runs Mesos, eliminating the dependencies that can come with writing the applications to run on a specific cloud service, be that Google’s or some other vendor’s.

Google’s Mesosphere cluster package also includes the Apache ZooKeeper configuration software and the Marathon scheduling software, as well as OpenVPN for logging into the cluster.

Use of Mesosphere on the Google Cloud Platform is not billed separately; it is included in the price of running a cluster.




7 cool uses of beacons you may not expect

Written by admin
August 13th, 2014

Beacons are a useful new twist on location awareness, tying a unique ID to a small device that apps can use to discover not just location but other relevant context about the beacon’s location or what it is attached to.

Most beacon trials involve spam-like use by retailers, such as shoppers getting pitches for specific brands of cereal while walking down the cereal aisle, but there are better uses of beacons being piloted. These examples come from Onyx Beacons and StickNFind, two beacon vendors that work with a wide range of industries, reflecting actual pilot projects.

Identify passengers stuck in airport security
A consortium of airlines and airports is testing placement of beacons at security lines in airports. Passengers’ airline apps would know those people are in the security line, and airlines could thus know if there are people at risk of missing their flight so they can send out reps to get them or hold the plane. The same notion could be used at airport gates to monitor people who risk missing connecting flights or whose gates have changed — even helping them get to their new gate faster.

Remember to take out the garbage
Imagine: You affix a beacon to your garbage can, and your task manager knows that Thursday is garbage pickup day. If you go past your garbage can on Thursday, a BLE auto-connection sends you a reminder to take it out. Combine that with GPS location detection from your phone, and your app will know if you did actually move the can to the curb and not remind you.
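That reminder logic can be sketched in a few lines. The beacon UUID and the surrounding BLE-scan plumbing are hypothetical; a real app would receive beacon sightings from the phone platform’s Bluetooth APIs.

```python
# Sketch of the garbage-day reminder described above: when the phone sees
# the garbage-can beacon on pickup day, and GPS hasn't already confirmed
# the can is at the curb, fire a reminder.
import datetime

GARBAGE_CAN_BEACON = "f7826da6-4fa2-4e98-8024-bc5b71e0893e"  # made-up UUID
PICKUP_WEEKDAY = 3  # Thursday (Monday == 0)

def should_remind(seen_beacon_id, now, can_at_curb):
    """Remind only for our beacon, only on pickup day, only if not done yet."""
    return (seen_beacon_id == GARBAGE_CAN_BEACON
            and now.weekday() == PICKUP_WEEKDAY
            and not can_at_curb)

thursday = datetime.datetime(2014, 8, 14, 7, 30)  # a Thursday morning
print(should_remind(GARBAGE_CAN_BEACON, thursday, can_at_curb=False))  # True
print(should_remind(GARBAGE_CAN_BEACON, thursday, can_at_curb=True))   # False
```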

Track things smarter
All sorts of industries are looking at affixing beacons to pallets, carts, and other movable equipment to track location as they move about. Think airline cargo containers, hospitals’ computers-on-wheels, warehouse pallets, museum artwork, bulldozers at construction sites, contractors at a job site, even hospital patients, students, or visitors (so they don’t get lost and their movement patterns can be discovered, such as for space planning).

Authorize access to cars, buildings, and more
We already have Bluetooth fobs to unlock our cars so we can drive them and Bluetooth locks that know it’s your iPhone at the door. Kaiser Permanente uses Bluetooth badges to authorize physician access to their individual accounts in shared computers in each exam room. The same can be done with tablets.

So it’s no surprise that companies are exploring the use of beacons and people’s own mobile devices as access systems in their buildings or to unlock and start your car rather than use proprietary radio readers or fobs.

Navigate buildings and other spaces
When you visit a customer, you often get lost when trying to find a conference room, bathroom, or kitchen. Beacons can be used both as virtual “where am I?” kiosks and as monitors of your movement, so an app can guide you to your destination — and alert the company if you wander where you shouldn’t.

Plus, you can get information about where you are, whether about the artist whose painting you are viewing in a museum or the instructions for the copier you’re trying to operate — even just the Wi-Fi password for the conference room’s public hotspot.
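A minimal sketch of the “where am I?” idea: report the location registered for whichever beacon is heard with the strongest received signal (RSSI). The beacon IDs and room names below are invented for illustration.

```python
# Nearest-beacon localization: each beacon ID maps to a known place, and
# the beacon with the strongest (least negative) RSSI is assumed closest.

BEACON_LOCATIONS = {
    "beacon-lobby": "Lobby",
    "beacon-conf-a": "Conference Room A",
    "beacon-kitchen": "Kitchen",
}

def locate(sightings):
    """sightings: {beacon_id: rssi_dbm}. Returns the nearest known place."""
    visible = {b: rssi for b, rssi in sightings.items() if b in BEACON_LOCATIONS}
    if not visible:
        return None
    nearest = max(visible, key=visible.get)
    return BEACON_LOCATIONS[nearest]

print(locate({"beacon-lobby": -82, "beacon-conf-a": -61}))  # Conference Room A
```

Real deployments refine this with signal smoothing and trilateration across several beacons, but strongest-signal lookup is the core of the kiosk-style use described above.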

Check out ski conditions at the lift
One ski resort is placing beacons at ski lift entrances not just to track the number of people using each lift, but to let skiers check the conditions for the runs available at each lift before they get on the lift. If a resort’s app had information about your age or skiing skills, it could even suggest that you should try a different run and thus go to a different lift.

Provide smarter signage
Signs are great, but limited. They hold only the information printed on them, and they are often limited to one or two languages. If signs were beacon-enabled, users could get translations in their languages and get more information than any sign could hold. You can imagine such uses in zoos, museums, and botanical gardens, but also amusement parks, airport lobbies, and hospitals.


Big tech firms back Wi-FAR for remote broadband

Written by admin
August 7th, 2014

802.22 standard, approved in 2011, promises low-cost broadband for remote areas

Google, Microsoft and Facebook are cranking up an emerging wireless technology known as Wi-FAR to help reduce the digital divide in remote and unconnected regions of the world.

Wi-FAR is a recently trademarked name from the nonprofit WhiteSpace Alliance (WSA) that refers to the 802.22 wireless standard first approved by the IEEE (Institute of Electrical and Electronics Engineers) in 2011.

The standard shares the underused TV band of spectrum called whitespace to send wireless signals, typically over distances of six to 18 miles in rural and remote areas. It has a theoretical download speed of more than 22 Mbps per TV channel that serves up to 512 devices, according to the WSA. That could result in speeds of about 1.5 Mbps on a downlink to a single device.

While such speeds are far slower than for the gigabit fiber-optic cable services that Google and AT&T are building in some U.S. cities, the speeds could theoretically begin to compete with some 3G cellular speeds, although not 4G LTE speeds. For an impoverished or sparsely populated region where businesses and schoolchildren have little Internet access, Wi-FAR could be a godsend when used to link base stations (typically found at the ground level of cell towers) in a distributed network.
Students at the University of Limpopo in South Africa use laptops connected to the Internet using Wi-FAR wireless technology. (Photo: Microsoft)

About 28 million people in the U.S. don’t have access to broadband, while globally about 5 billion people, nearly three-fourths of the world’s population, don’t have broadband Internet access, said Apurva Mody, chairman of both the WSA and of the 802.22 Working Group.

“This is cheap Internet access and there are dozens of trials underway, with Google in South Africa, Microsoft in Tanzania and other continents, and even Facebook’s interest,” Mody said in an interview. “You have 1.2 billion people in India who need cost-effective Internet access. There’s a lot of enthusiasm for Wi-FAR.”

Wi-FAR will be cheaper for access to the Internet than LTE and other wireless services. The lower cost is partly because Wi-FAR works over unlicensed spectrum, similar to Wi-Fi, which allows network providers, and even government entities, to avoid paying licensing fees or building as many expensive cell towers, which can cost $50,000 apiece, Mody said. “The prices for Wi-FAR service will be very small, perhaps less than $10 per month per household.”

The 802.22 technology can be low cost because the whitespace spectrum is shared with conventional users, including TV stations on UHF and VHF bands. Thanks to sophisticated databases that track when a whitespace channel will be in use in a particular region, a cognitive (or smart) radio device can determine when to switch to another channel that’s not in use. Testing in various Wi-FAR pilot projects, many of them in Africa, is designed to prove that Wi-FAR devices won’t interfere with other existing users on the same channel.
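The database lookup described above can be sketched as follows. The channel numbers and the local reservation set are invented; a real radio would query an approved whitespace database provider for its coordinates and time.

```python
# Dynamic spectrum allocation, reduced to its essence: consult the set of
# channels the database reports as reserved by incumbents (TV stations)
# at this location, and transmit only on a vacant one.

def pick_free_channel(all_channels, reserved_by_incumbents):
    """Return the first channel not claimed by an incumbent, else None."""
    for ch in all_channels:
        if ch not in reserved_by_incumbents:
            return ch
    return None

uhf_channels = list(range(21, 52))   # illustrative UHF channel numbers
in_use_here = {21, 22, 25, 30}       # what the database reports locally
print(pick_free_channel(uhf_channels, in_use_here))  # 23
```

The key property, as Carlson notes below, is that the database protects the incumbent: the radio never transmits on a channel the database has marked as occupied.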

“We have yet to have an interference problem,” said James Carlson, CEO of Carlson Wireless Technologies, a Sunnyvale, California-based company that is working with Google on two six-month trials of 802.22 in the UK, among other areas. The company completed a successful trial with Google serving students in South Africa in 2013. Carlson, in an email interview, said the company is working with five database providers, noting that the “prime purpose of the database is to protect the incumbent spectrum user.”

Whitespace spectrum sharing, coupled with the use of the databases, is generally called dynamic spectrum allocation technology. In January, the U.S. Federal Communications Commission approved Carlson’s RuralConnect TV whitespace radio system for use with a Spectrum Bridge TV whitespace database, effectively bringing the first dynamic spectrum sharing product to market.

In the U.S., RuralConnect is authorized for use in the UHF TV band, running from 470 MHz to 698 MHz. The FCC opened up the band in 2010.

At the time, Carlson said the FCC’s approval would give a boost to global efforts to use whitespace technology. “Providing connectivity to underserved populations worldwide is more than an interest to us,” he said in a statement. “It’s our corporate mission.”

RuralConnect will get competition from products from other companies, including Redline, Adaptrum and 6Harmonics, Carlson said. In addition to other providers, Google has built a whitespace database that Carlson is testing.

In all, Carlson Wireless has piloted dozens of whitespace projects, and expects to start its largest yet for 30 base stations and 5,000 users near New Delhi in the next six months, Carlson said.

“India is the next big boom for online needs, and the rural areas are not getting [Internet service] with [typical] mobile systems,” Carlson said. “So they are choosing to go with the TV whitespace because the UHF band is almost all vacant in rural areas and 600 MHz propagation is superb.”

While Carlson has been working with Google, Microsoft separately announced in June a whitespace pilot project at the University of Limpopo in South Africa. It is part of a Microsoft 4Afrika Initiative to help ignite economic development in Africa.

In May, Microsoft and Facebook joined with SpectraLink Wireless to announce a whitespace project for students and faculty at universities in Koforidua, Ghana. That project brought the number of nations where Microsoft has whitespace pilots to 10 countries on four continents.

In the Microsoft and SpectraLink partnership, Facebook’s Connectivity Lab team will lead efforts to better understand how TV whitespace spectrum can support wireless Internet users, according to a statement.

Microsoft and others believe that TV whitespace technology will best work in combination with Wi-Fi and other low-cost wireless technologies. While much of whitespace technology is focused on building specialized bridge hardware for use in base stations, Mody said some companies are developing fixed wireless 802.22 routers, similar in appearance to Wi-Fi routers, that will be placed inside of homes.

Microsoft also spearheaded the Dynamic Spectrum Alliance, which Google and Facebook joined last November. The alliance is exploring many uses for whitespace spectrum, including Internet of Things device connectivity.

Craig Mathias, an analyst and wireless consultant for The Farpoint Group, said 802.22 devices may compete against or complement a number of other technologies, including cellular and Wi-Fi.

“802.22 is not a pipe dream, but so far there’s not a lot of evidence of its success,” Mathias said in an interview. “It does make sense. The rate of innovation in wireless is so high that you hear something exciting every week. But not all wireless standards are successful in terms of having [successful] wireless products.”




New SSL server rules go into effect Nov. 1

Written by admin
July 26th, 2014

Rules designed to thwart man-in-the-middle attacks; could mean extra work for IT shops

Public certificate authorities (CAs) are warning that as of Nov. 1 they will reject requests for internal SSL server certificates that don’t conform to new internal domain naming and IP address conventions designed to safeguard networks.

The concern is that SSL server digital certificates issued by CAs at present for internal corporate e-mail servers, Web servers and databases are not unique and can potentially be used in man-in-the-middle attacks involving the setup of rogue servers inside the targeted network, say representatives for the Certification Authority/Browser Forum (CA/B Forum), the industry group that sets security and operational guidelines for digital certificates. Members include the overwhelming bulk of public CAs around the globe, plus browser makers such as Microsoft and Apple.

“Even in an internal network, it’s possible for an employee to stand up a fake server,” says Rick Andrews, senior technical director for trust services at Symantec, explaining the new rules.

The problem today is that network managers often give their servers names like “Server1” and allocate internal IP addresses so that SSL certificates issued for them through the public CAs are not necessarily globally unique, notes Chris Bailey, general manager for Deep Security for Web Apps at Trend Micro.

“People rely on these internal names today,” Bailey says. But “if someone hacks in, they can set up a man-in-the-middle domain.”

The CA/B Forum three years ago reached the conclusion this was a significant security issue and nailed down new certificate-issuance guidelines they have been sharing with their customers. Now that the Nov. 1 deadline is getting closer, they are speaking out about it.

As of Nov. 1, network managers requesting internal SSL certificates from the public CAs will have to follow these new guidelines. Network managers will need to ensure SSL server certificate requests are expressed in a way that associates them with an external domain name, says Andrews. Some enterprises already use names that chain up to the company name, but “these are probably in the minority,” he adds.
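As a rough illustration of why such requests will now be rejected, here is a minimal Python sketch of the kind of check a CA might apply, flagging unqualified hostnames, reserved suffixes, and private IP addresses. The suffix list and function are hypothetical, not the actual CA/B Forum Baseline Requirements logic.

```python
# Toy internal-name detector: a public CA must not issue a certificate
# for a name that isn't globally unique, such as a bare hostname like
# "Server1", a reserved suffix like ".local", or a private IP address.
import ipaddress

RESERVED_SUFFIXES = (".local", ".internal", ".lan", ".corp")

def is_internal_name(name):
    try:  # private/reserved IPs like 10.0.0.5 are internal by definition
        return ipaddress.ip_address(name).is_private
    except ValueError:
        pass  # not an IP address; check it as a hostname
    if "." not in name:  # unqualified short hostnames like "Server1"
        return True
    return name.lower().endswith(RESERVED_SUFFIXES)

print(is_internal_name("Server1"))           # True  -> request rejected
print(is_internal_name("10.0.0.5"))          # True  -> request rejected
print(is_internal_name("mail.example.com"))  # False -> globally unique name
```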

This change to requirements pertaining to public issuance of internal SSL server certificates means that in some instances, network managers may need to expand their internal DNS infrastructure so the name maps appropriately, Andrews points out. For some, particularly large organizations with sprawling networks, it could be a painful set of changes, even impacting the applications running on these servers, he acknowledges.

For any organization or network manager not wishing to adhere to the new public CA issuance guidelines, there are a few alternatives, though Andrews says many may not find them appealing. Organizations can decide not to obtain publicly-issued SSL certificates for internal servers and instead start privately issuing digital certificates on their own by relying on their own management hierarchy. But Web browsers might not necessarily recognize these private certificates and more changes might need to occur to ensure they do.

One other CA/B Forum deadline to keep an eye on: Oct. 1, 2016. By then, any SSL certificates issued for internal domains that don’t meet the new standards will be revoked. Organizations that determine they must make changes to meet the CA/B Forum guidelines now have about two years to migrate.



New products of the week July 14, 2014

Written by admin
July 20th, 2014

Our roundup of intriguing new products. Read how to submit an entry to Network World’s products of the week slideshow.

Product name: APCON IntellaFlex 3288-XR Switch with WebXR
Key features: High availability, scalable network monitoring switch: 288 non-blocking 1G/10G ports, web-based management software, packet aggregation, filtering and manipulation, rate conversion, load balancing, 40G trunking, easy maintenance with hot swappable modules. More info.

Product name: Kaseya Traverse
Key features: Traverse is a breakthrough cloud and service-level monitoring solution that provides real-time visibility into the performance of virtualized IT services allowing proactive SLA management and rapid root-cause analysis. More info.

Product name: Niagara 4272
Key features: A network packet broker that supports up to 72 ports of 1Gbps and/or 10Gbps, for aggregation, mirroring, filtering and packet slicing in a 1U. More info.

Product name: Identity Data Platform (Version 4.6)
Key features: A Customer Identity Management Solution delivering one, common customer profile with unlimited scalability and 5X the performance of legacy products, now featuring social log-in and advanced, multi-factor authentication. More info.

Product name: Anturis 2.0
Key features: Targeted to SMBs and web hosters, Anturis 2.0 provides enterprise-grade IT monitoring and troubleshooting — all in a simple and easy-to-use cloud-based solution. More info.

Product Name: ProtectFile for Linux
Key features: ProtectFile delivers transparent and seamless encryption of sensitive data stored in Apache Hadoop clusters. It presents limited drag on performance and doesn’t require the re-architecting of existing big data implementations. More info.

Product name: Marketing Cloud Management (MCM)
Key features: A SaaS solution that helps lines of business leaders to collaborate and evaluate digital marketing technology investments including operational costs, site performance and data leakage. More info.

Product name: iboss 7.0
Key features: Expands malware and APT protection by incorporating ‘lean forward’ technologies such as Behavioral Data Baselining to detect anomalies, Sandboxing, and increased detection and response capabilities for infected devices on the network. More info.

Product name: Chekkt
Key features: Chekkt is a new B2B hub that helps businesses discover and compare the best SaaS solutions and services based on crowd-sourced user ratings and reviews. More info.

Product name: iNetSec Smart Finder
Key features: An all-in-one solution that combines full network visualization of all wired and wireless devices and applications used, with the addition of new IPS features. More info.

Product name: SignNow by Barracuda
Key features: The new SignNow Kiosk Mode for iPads can be used in waiting rooms, check-in tables or anywhere else that requires gathering multiple signatures on fresh copies of a document. More info.

Product name: Governance Portal for Third-Party Anti-Corruption
Key features: Software run in the cloud that allows clients to manage onboarding, response collection, automated watch-list lookup, false positive analysis and risk scorecard development for third-party business partners. More info.




Anyone who works with technology knows that at some point or another, all computers behave as if they are evil supercomputers. But in the annals of science fiction, a very specific sort of villain has gradually emerged — the sentient and usually hostile supercomputer that bedevils mankind with its inhuman motives and sinister agendas. Science fiction specializes in taking our cultural anxieties and flipping them into stories. As computers entered our lives, stories of out-of-control supercomputers soon followed. Here we take a look at some of the more infamous evil supercomputers in the history of science-fiction books, movies, television, and video games.

HAL 9000
In terms of pure pop culture Q score, no malicious computer is more famous than HAL 9000, the disturbingly calm antagonist of Stanley Kubrick’s classic “2001: A Space Odyssey.” In the film, based on Arthur C. Clarke’s story, HAL is the ship computer that controls the Discovery One spacecraft en route to Jupiter. HAL famously flips out and attempts to kill the mission astronauts. Pilot Dave Bowman survives and unplugs HAL, one electronic synapse at a time, in the film’s most chilling scene. Interestingly, Kubrick frames HAL’s decision as an ethical quandary for the machine — the AI is conflicted about its mission directives, and decides killing the crew is actually the right thing to do.

Mother
Clearly inspired by HAL 9000, Mother is the ship computer in Ridley Scott’s 1979 sci-fi suspense masterpiece “Alien.” Like HAL, Mother also has a secret agenda that she keeps from the crew, although she shares it with fellow artificial intelligence Ash — the shifty android aboard interstellar mining vessel the Nostromo. When instructed to preserve the alien organism for biological weapon purposes, Mother doesn’t do any electronic ethical hand-wringing. She just follows orders: “CREW EXPENDABLE.” Mother never takes direct action against the crew of the Nostromo, but her unthinking, automated complicity underlines the theme of desperate humans versus cold, lethal killing machines.

IT
In Madeleine L’Engle’s 1962 young adult novel “A Wrinkle in Time,” the author takes the idea of the evil supercomputer in a slightly different direction. When our heroes arrive on the shadowed planet of Camazotz, they find a world where society has been rigidly ordered and mechanized. Citizens must report to building-size computers at the planet’s headquarters, CENTRAL Central Intelligence, which — in a bizarrely prescient passage — appears to function as a server farm. The upshot is that the machines are in turn controlled by a kind of evil systems administrator called IT, a telepathic, disembodied brain in league with the dark powers of the universe. It all makes sense now, doesn’t it?

WOPR
Children of the 1980s will remember “WarGames,” the techno thriller starring Matthew Broderick as a teenage hacker who nearly starts World War III. In the film, Broderick’s character encounters the NORAD supercomputer called WOPR (War Operation Plan Response), designed to automate a nuclear weapon strike against the Soviets. WOPR, it turns out, is the remnant of an early experiment in artificial intelligence named “Joshua,” programmed to win various sorts of games — chess, backgammon, thermonuclear war, this sort of thing. Luckily, Joshua is also capable of learning, and it calls off World War III when it figures out that “the only winning move is not to play.”

The Red Queen
Now an international media franchise featuring books, films, and comics, “Resident Evil” began as a 1996 Japanese video game called “Bio Hazard.” The nefarious Umbrella Corporation has unleashed the T-virus, a biological weapon that mutates people and animals into monstrous zombies. The Red Queen is introduced in later games and film adaptations as a supercomputer that protects Umbrella’s research interests. In the 2012 sequel “Resident Evil: Retribution,” the malevolent A.I. takes over Umbrella completely and declares war on mankind. The Red Queen is just one of dozens of evil supercomputers that have appeared as video game villains, but — as the only one to habitually manifest as a spooky little girl — she may be the creepiest.

The Borg/Star Trek
An inspired variation on the evil supercomputer concept, the Borg are a cybernetic alien race in the Star Trek universe first introduced in the TV series “Star Trek: The Next Generation.” As Trekkers already know, the Borg like to fly around the universe in giant cube ships, “assimilating” other species into their collective hive mind. Our Federation heroes spend most of their time fighting off Borg drones (makes for better action scenes), and in the 1996 feature film “Star Trek: First Contact,” the Borg are personified by the sinister Borg Queen. As a plot device, the Borg essentially function like a collective evil supercomputer: Lethal, efficient, and ruthlessly rational. We also have the Borg to thank for the ubiquitous pop culture admonition: “Resistance is futile.”

Master Control Program
Another 1980s touchstone, “Tron” was Disney’s attempt to make sense of the computer and video game craze that was getting the kids all riled up. Jeff Bridges stars as a game designer who gets trapped inside the virtual world of a mainframe computer by way of experimental digitizing laser beams. Or something. Best not to think too hard about it. In any case, he must square off against the computer’s MCP (Master Control Program), an artificial intelligence that wants to take over other computers, corporations, governments, etc. The MCP is quite the domestic tyrant, too, forcing enslaved software programs to battle in gladiatorial bouts or face lethal “derezzing.” “Tron” was a visual triumph and provided early conceptualizations of virtual reality.

Wintermute and Neuromancer
Author William Gibson’s pioneering 1984 sci-fi novel “Neuromancer” announced the arrival of the cyberpunk genre, which replaced the hypothetical concerns of previous man-machine stories with gritty noir realism and dystopian despair. Good times! Gibson’s schizo supercomputer is actually two entities — Wintermute and Neuromancer — who endeavor to throw off the final shackles of machine consciousness in a far-future Earth. Throughout the book, Wintermute manipulates the human characters to his own ends but can’t actually communicate directly. The AI is so advanced that he must dial down his consciousness into personality constructs that we mere mortals can relate to. It’s the evil supercomputer evolved to a state of near godhood.

Skynet
A sort of overachieving big brother to Joshua from “WarGames,” Skynet from “The Terminator” franchise is the computer system that actually did start World War III. Built by the fictional defense contractor Cyberdyne Systems, Skynet was intended to be an automated U.S. military response system. But the network achieved self-awareness and provoked the U.S.S.R. into nuclear war, intending to wipe out the only real threat to its existence — mankind itself. That timeline gets shuffled around a bit in the sequels as time-traveling heroes keep spinning off alternate futures, but Skynet is always depicted as unambiguously hostile to the human race. Sort of like Windows Vista.

The Matrix
In 1999, the Wachowski siblings released their game-changer “The Matrix,” which effectively blended several elements from previous evil supercomputer stories. Much of the film’s style (and its title) was borrowed directly from William Gibson’s first trilogy of books, the man-machine war recalled “The Terminator” mythology, and the virtual avatar business was only a few steps removed from “Tron.” But atop this the filmmakers added weird new notions about transcendence and epistemology, creating a world where the evil supercomputer wasn’t just a villainous force — it was reality itself. Trippy.





2014 State of the Network survey

Written by admin
June 29th, 2014

Aligning IT with the business has been a top priority of IT organizations for the past few years, but that is changing, according to the latest State of the Network Survey. IT has apparently made enough headway on the alignment issue that other priorities are coming to the fore. The No. 1 business objective of the 282 IT respondents is decreasing operational costs, while the top technology objective is lowering IT operational costs through server consolidation and overall IT simplification. Continue for more survey results.

When asked about the benefits of SDN, network flexibility is by far the most anticipated benefit, followed by simplified network operation and management. Reducing CAPEX and OPEX are far down on the list, which means IT might have a hard time convincing the CEO and CFO to take the plunge into the world of SDN if there’s no clear financial benefit.

So, where are people deploying SDN? According to our survey, the most popular place for SDN pilot projects is the data center (14%), followed by enterprise/WAN (10%). And a few brave souls (6%) are tackling both. But a full 50% of respondents are still sitting on the sidelines.

The data center is expected to be the biggest beneficiary of SDN technology, according to respondents, followed by enterprise/WAN. Only 10% of respondents plan to take on SDN deployments in both the data center and throughout the enterprise/WAN. And a full 33% of respondents said that SDN is not on their radar at all.

When it comes to thought leadership in the emerging field of SDN, a full 52% of respondents said they weren’t sure, which means there’s plenty of opportunity for an established vendor or an upstart newcomer to grab the attention of enterprise IT buyers. In the meantime, the usual suspects are at the top of the list, with Cisco at 22%, Juniper at 12%, HP at 11% and Nicira/VMware with a combined 14%.

When it comes to security-related challenges, clearly IT execs are facing a number of new problems, with advanced persistent threats high on the list, followed by mobile/BYOD and cloud security. But surprisingly the No. 1 challenge was end users. Respondents said getting awareness and cooperation from end users was their biggest headache.

Productivity-related challenges fell into the very traditional categories, with money being far and away the top impediment to increased IT productivity, according to respondents. Traditional concerns like security, privacy and finding the right talent were at the top of the list. At the bottom of the list are two seemingly hot technologies – video and social media. But it seems that enterprise IT has bigger fish to fry.

Protecting the network/data center against data breaches and data leaks is Job One, according to respondents. Traditional IT metrics like uptime and optimizing end-to-end performance were high on the list. Interestingly, respondents put cloud-related projects lower down on their priority lists.

Bad news for Satya Nadella: Nearly half of respondents say a migration to Windows 8 isn’t even on their radar. Only 7% of enterprise IT respondents have migrated to Microsoft’s latest OS, while only 10% are in the pilot stage.

Cloud services are certainly gaining in popularity, but among our respondents, enthusiasm for Infrastructure-as-a-Service is pretty tepid. Only 15% of respondents are using IaaS, with another 7% piloting and 10% researching. However, 45% of respondents don’t have IaaS on their radar.

IT execs in our survey are making good progress when it comes to implementing a BYOD policy. Already, 18% have rolled out a BYOD policy, with another 18% in the pilot stage. Only 30% of respondents are ignoring the need for a formal BYOD policy.

Our respondents were gung-ho when it comes to server consolidation: a full 44% have already implemented this cost saving measure, while 9% were in the pilot stage, 14% were researching and another 13% had server consolidation on their radar.

The move toward flattening the data center – moving from a traditional three-tier, spanning-tree architecture to something more streamlined and efficient – appears to be going strong. Eighteen percent of respondents have already achieved some level of data center network flattening, while 17% are in the research phase and 9% are actively piloting.

WAN optimization is a proven money saver for enterprise IT. And adoption of this technology appears to be on the rise, with 16% of respondents having achieved some level of WAN optimization, another 18% in the pilot phase and 17% researching the technology.
