NSS Labs recently released the results and analysis from its latest Browser Security Comparative Analysis Report, which evaluated the ability of eight leading browsers — Apple Safari, Google Chrome, Kingsoft Liebao, Microsoft Internet Explorer, Mozilla Firefox, Opera, Qihoo 360 Safe Browser, and Sogou Explorer — to block socially engineered malware (SEM). The use of social engineering to distribute malware continues to account for the bulk of cyber attacks against both consumers and enterprises, making a browser’s ability to protect against these kinds of attacks an important criterion for personal or corporate use.

Microsoft Internet Explorer continues to outperform other browsers. With an average block rate of 99.9 percent, the highest zero-hour block rate, fastest average time to block, and highest consistency of protection over time percentages, Internet Explorer leads in all key test areas.

Google Chrome remained in the top three, but its average block rate fell significantly to 70.7 percent, down from 83.17 percent in the previous test.

Cloud-based endpoint protection (EPP) file scanning provides substantial defenses when integrated with the browser. Kingsoft Liebao browser utilizes the same cloud-based file scanning system used by Kingsoft antivirus and had the second highest overall block rate at 85.1 percent, ahead of Chrome by almost 15 percentage points.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

Google’s Safe Browsing API does not provide adequate SEM protection. Apple Safari and Mozilla Firefox both utilize the Google Safe Browsing API and were the two lowest performing browsers in this latest test. Both also saw significant drops of around six percentage points in their average block rates — Safari from 10.15 percent to 4.1 percent and Firefox from 9.92 percent to 4.2 percent.
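For context, Safari and Firefox consult Google's Safe Browsing service, which lets a client ask whether a URL appears on Google's malware or phishing lists. The sketch below builds a Lookup-style request; the endpoint and parameters follow the 2014-era Lookup API as commonly documented, but treat them as illustrative, since real browsers sync a local hash database rather than querying per URL, and the client name and key here are placeholders.

```python
from urllib.parse import urlencode

def build_lookup_url(api_key: str, url_to_check: str) -> str:
    """Build a Safe Browsing Lookup-style request URL (illustrative only;
    production browsers use a more efficient local-database protocol)."""
    base = "https://sb-ssl.google.com/safebrowsing/api/lookup"  # 2014-era Lookup endpoint
    params = {
        "client": "demo-client",   # hypothetical client name
        "key": api_key,            # your API key (placeholder)
        "appver": "1.0",
        "pver": "3.1",             # Lookup API protocol version
        "url": url_to_check,
    }
    return base + "?" + urlencode(params)

req = build_lookup_url("YOUR_API_KEY", "http://example.com/setup.exe")
```

A 200 response with a body of "malware" or "phishing" would indicate a listed URL; a 204 indicates the URL is not on the lists.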

Chinese browsers tested for the first time prove viable. This year, three browsers from China were included in testing for the first time, and Kingsoft’s Liebao browser jumped ahead of Google Chrome with an overall protection rate of 85.1 percent. Sogou Explorer had the fourth highest average block rate at 60.1 percent.
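The gaps reported above are easy to verify from the figures quoted in this article alone (a quick arithmetic check, using only the published block rates):

```python
# Average SEM block rates from the NSS Labs report, in percent.
rates = {
    "Internet Explorer": 99.9,
    "Kingsoft Liebao": 85.1,
    "Google Chrome": 70.7,
    "Sogou Explorer": 60.1,
    "Firefox": 4.2,
    "Safari": 4.1,
}

# Liebao's lead over Chrome: "almost 15 percentage points."
liebao_vs_chrome = rates["Kingsoft Liebao"] - rates["Google Chrome"]

# Safari's and Firefox's drops from the previous test: around six points each.
safari_drop = 10.15 - rates["Safari"]
firefox_drop = 9.92 - rates["Firefox"]
```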

Commentary: NSS Labs Research Director Randy Abrams
“Selecting a browser with robust socially engineered malware protection is one of the most critical choices consumers and enterprises can make to protect themselves. Microsoft’s SmartScreen Application Reputation technology continues to provide Internet Explorer the most effective protection against socially engineered malware,” said Randy Abrams, Research Director at NSS Labs. “This year NSS added three browsers from China. The Kingsoft Liebao browser displaced Chrome from second place by using a combination of URL filtering with the cloud-based file scanning technology that Kingsoft uses for their antivirus product. Sogou Explorer, another browser from China, was the only other tested browser to exceed 50 percent protection against socially engineered malware. Firefox and Safari failed to achieve five percent effectiveness and leave less technical users at considerable risk.”

NSS Labs recommendations
Learn to identify social engineering attacks in order to maximize protection against SEM and similar threats.
Use caution when clicking links shared by friends and other trusted contacts, such as banks. Waiting just one day before clicking on a link can significantly reduce risk.
Enterprises should review current security reports when selecting a browser. Do not assume the browser market is static.

The ‘always-on’ IT culture: Get used to it

Written by admin
April 9th, 2014

Around-the-clock accessibility is now expected for a broad range of IT roles. Here’s how to cope.

A couple of weeks into his job as lead Qt developer at software development consultancy Opensoft, Louis Meadows heard a knock on his door sometime after midnight. On his doorstep was a colleague, cellphone and laptop in hand, ready to launch a Web session with the company CEO and a Japan-based technology partner to kick off the next project.

“It was a little bit of a surprise because I had to immediately get into the conversation, but I had no problem with it because midnight here is work time in Tokyo,” says Meadows, who adds that after more than three decades as a developer, he has accepted that being available 24/7 goes with the territory of IT. “It doesn’t bother me — it’s like living next to the train tracks. After a while, you forget the train is there.”

Not every IT professional is as accepting as Meadows of the growing demand for around-the-clock accessibility, whether the commitment is as simple as fielding emails on weekends or as extreme as attending an impromptu meeting in the middle of the night. With smartphones and Web access pretty much standard fare among business professionals, people in a broad range of IT positions — not just on-call roles like help desk technician or network administrator — are expected to be an email or text message away, even during nontraditional working hours.

The results of Computerworld’s 2014 Salary Survey confirm that the “always-on” mentality is prevalent in IT. Fifty-five percent of the 3,673 respondents said they communicate “frequently” or “very frequently” with the office in the evening, on weekends and holidays, and even when they’re on vacation.

Read the full report: Computerworld IT Salary Survey 2014

TEKsystems reported similar findings in its “Stress & Pride” survey issued last May. According to the IT services and staffing firm, 41% of those polled said they were expected to be available 24/7 while 38% said they had to be accessible only during the traditional work hours of 8 a.m. to 6 p.m. The remaining 21% fell somewhere in between.

“Being on all the time is the new normal,” says Jason Hayman, market research manager at TEKsystems. “[Bring-your-own-device] trends and flexible work arrangements have obliterated the traditional split between work and nonwork time, and IT gets hit hard.”
The reality of staying relevant

Around-the-clock accessibility is not only part of the IT job description today, it’s the reality of staying relevant in a climate where so many IT roles are outsourced overseas, according to Meadows. “Work can be done much cheaper in India, Russia or China,” he says. “So you need to be able to get things done as fast as stuff happens in other places, and many more work hours are required to make that happen. When you sign up for this job, that’s just the way it is.”
Checking in

How frequently, on average, do you check messages or communicate with your office during nonscheduled work hours such as evenings, weekends, holidays or vacation?

Being available may be part of the job, but demands can become onerous, notes Robert Sample, formerly a senior technical analyst with Cox Media Group. “When I started in the 1998 to 1999 time frame, a person would be on call for a week, and typically you might get one or two contacts during off hours,” says Sample, who is currently between jobs. “Over the last few years, the change has been toward immediate responsiveness and more active involvement.”

At Cox Media, Sample was issued a BlackBerry that pinged him with an email alert when a trouble ticket was started. “Our SLA [service-level agreement] specified a response within four hours no matter what,” he says. “That goal didn’t even consider whether it was [during] work hours.”
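A clock-time SLA like the one Sample describes is simple to model; this hypothetical sketch shows why a ticket opened late on a Friday night still demands a weekend response:

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=4)  # respond within four hours, no matter what

def response_deadline(ticket_opened: datetime) -> datetime:
    """Wall-clock SLA: the deadline ignores evenings, weekends and holidays."""
    return ticket_opened + SLA

# A ticket opened at 11 p.m. on a Friday must be answered by 3 a.m. Saturday.
opened = datetime(2014, 4, 11, 23, 0)   # Friday, 11:00 p.m.
deadline = response_deadline(opened)
```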

Many IT professionals say they’ve made a routine of frequent check-ins. It helps avert problems and makes the workday smoother, they say, since there often isn’t enough time during traditional hours to get everything done. That’s partly what motivates Merlyn Reeves to make herself available around the clock.

A project manager for a network communications provider, Reeves works from home. She says the need to coordinate with colleagues in different time zones means she might have to chair a conference call at 7 a.m. or respond to emails while watching 60 Minutes on a Sunday night. She keeps her cellphone bedside so she can respond to the occasional email at night, and she works on Sundays to get a jump-start on the week.

Reeves says she doesn’t do that because her managers expect it; rather, it’s her personal work ethic that drives her. “It’s not spoken that it’s expected, and if I didn’t respond at 8 p.m. on Sunday night, no one would chastise me,” she says. “But as a project manager, I don’t ever want to be the holdup to getting something done.”
Making 24/7 work

Work ethic aside, Reeves and other IT professionals have developed strategies for managing the “always-on” requirement in the hopes of creating a modicum of work/life balance. Reeves won’t wade in on certain email discussions during off-hours, and she’s learned to take vacation during Christmas week, when many people aren’t working, so she can unplug without the stress.

Sample has also changed the way he vacations. “I’ve started taking a cruise every year,” he says. “You get a few miles offshore, and cellphones don’t work. That way, you can take a vacation and not have to worry about problems until you get back.”

Kathy McFarland, quality assurance specialist at Vanderbilt University Medical Center, makes it very clear in her voicemail message and email signature if she’s out of the office and when and how she will respond. And like Reeves, she has gotten strategic about the emails she will and won’t answer during off-hours.

“You have to try to stop the insanity somehow,” she says. “If it’s a focused question that I can answer quickly, I will respond, and that’s OK. When it’s a flurry because there are multiple people on a thread and everyone gets whipped up, I refuse to respond.”
Long hours

How many hours per week do you work on average?
Even with those coping strategies, she admits it’s hard to unplug. “You try to turn off when you can, but if the executive steering committee wants answers, they want them when they want them,” McFarland says. “They don’t care if it’s 5 p.m. on a Friday.”

Still, there are ways to draw the line, notes Allan Harris, a cloud architect at Partners HealthCare. While Harris regularly makes himself available during off-hours, he proactively makes sure people know how and where to seek help when he’s out of the office on planned time off with his family. More often than not, people respect his time, but there are the occasional situations where someone tracks him down on his cellphone.

“If I have an out-of-office message that specifies that someone else should be contacted, and someone calls me directly, I have a problem with that,” he says. The first thing he does is triage the problem, but he also sets boundaries. “The problem is most important, but I do let the customer know that we’ll address the situation when I come back to the office, where we’ll talk about SLAs and the proper escalation procedures,” he explains.

The embrace of the bring-your-own-device trend among IT pros definitely contributes to the increase in calls during off-hours, says Harris. “When you give out your personal cell number, it’s kind of like a Batphone — people think they can get a personal response.”
Taking the good with the bad

Despite the inconveniences, IT professionals say there is an upside to the 24/7 mentality. Because people are actively working at night, in the early mornings or on weekends, there is greater flexibility to step out during the workday to run errands or spend time with the kids, especially if you can work from home.

That’s how Scott Murray, business intelligence manager at Hospital Corporation of America (HCA), sees it. Murray, who has worked from home for six years, says he regularly emails or instant-messages with colleagues late at night or in the early morning hours, and he works some weekends to create reports tied to the monthly accounting cycle.

On the flip side, Murray coaches high school soccer and is out for practice from 3:45 to 5:30 p.m. every day during the season. “I feel like that’s OK because I’m available on weekends and after work,” he says. “If I were sitting in an office, there would be an expectation that I’d be there until 5 p.m. or later, and I couldn’t do the coaching.” Additionally, Murray doesn’t go totally dark. “I still answer the phone at soccer practice,” he says. “If something goes wrong, my boss knows he can reach me.”

Establishing trust and respect helps make the “always-on” culture work for both IT employees and management, says Cynthia Hamburger, CIO/COO at Learning Ally, a nonprofit dedicated to helping people with learning disabilities. Hamburger, who has been a CIO at larger companies, including Dun & Bradstreet, says it’s important to protect people’s personal time and publicly acknowledge them when they go beyond the call of duty. But respecting personal time doesn’t necessarily mean that weekends are off-limits.

“If you are on vacation with the family, unless the house is burning down, we will not contact you,” she says. But for those who aren’t taking paid time off, “there is an ‘always available’ mentality. It goes with an IT role and, unfortunately, the digitalization of the planet has made it worse,” she adds. “There is an expectation that most forms of contact are checked pretty regularly.”

While Hamburger says technology has made it easier for IT professionals to stay connected, she says the idea of 24/7 access is really nothing new, particularly among those interested in advancement. “People who have been the most successful in IT have had this work ethic all along,” she says. “The technology has just made us much more accessible in real time.”




Microsoft created a virtual assistant, made Windows free on small devices, and brought back the Start button – but it’s still playing catch-up

This has been a big week for Microsoft, with a flood of new announcements and changes of direction. At its Build conference and beyond, new CEO Satya Nadella made a number of moves designed to reverse the public perception that the company is an aging also-ran in the technology races.

The changes include:
Rolling out its new Cortana digital voice assistant
Announcing that Windows would be free to manufacturers of devices with small screens
Coming out with “universal” Windows technology that helps developers build apps that run on multiple versions of Microsoft’s operating system
Reviving the popular “Start” menu for Windows 8.1

Though some of those moves are more important than others, they’re all good things. Unfortunately, I don’t think they’ll be enough to solve Microsoft’s problem of being seen as your father’s technology vendor. Here’s why:

Consumers vs. IT
As noted above, Microsoft’s issues right now revolve around how the company is perceived by consumers, and it’s unlikely that these initiatives will be enough to change those perceptions. While all are useful, none of them is truly new. Instead, they’re playing catch-up to existing products and services from Microsoft’s competitors, perhaps with incremental improvements, or acknowledgements that previous Microsoft strategies simply weren’t working out.

Technology professionals will welcome these changes, but the IT community isn’t where Microsoft’s problems lie. In my experience, enterprise IT generally likes and trusts the company. Microsoft’s challenges lie in convincing fickle consumers that it’s as cool and innovative as Apple and Google. I can’t imagine these moves being exciting enough to do that.

Better, but not better enough
While initial reports suggest that Cortana is a credible or even superior alternative to Apple’s Siri and Google Now, the fact remains that other companies pioneered the voice assistant idea. Cortana would have to be light-years better than its already-in-place rivals to truly give Microsoft a significant advantage.

Similarly, making Windows free for mobile devices may help spark more device makers to adopt the platform, but it’s not like it will make an immediate difference to consumers. Besides, Android is already free to license. Once again, Microsoft is playing catch-up.

Universal Windows app development may pay off with more app choices in the long run, but it’s a pretty geeky concept for most end users. Finally, bringing back the Start menu will ease the transition to Windows 8 for some holdouts, but let’s face it, the cool kids aren’t really interested in desktop Windows at this point.

Put it all together and you’ve got a collection of tweaks that could change the substance of what Microsoft does, but won’t dent the way most people think of the company.

More, please!
Still, there’s a big ray of hope here. The fact that Microsoft was willing and able to make these changes could signal that more are on the way. If Microsoft can keep shaking things up and continue to show that things really are different now, eventually people will begin to notice and perhaps change their minds about the company. And then it truly won’t be your father’s Microsoft any more.



New CEO Satya Nadella comes out swinging on ‘cloud first, mobile first’ strategy

As expected, Microsoft CEO Satya Nadella today hosted a press conference where the company unveiled Office for iPad, breaking with its past practice of protecting Windows by first launching software on its own operating system.

CEO Satya Nadella expounded on Microsoft’s ‘cloud first, mobile first’ strategy today as his company unveiled Office for iPad as proof of its new platform-agnosticism.

Three all-touch core apps — Word, Excel and PowerPoint — have been seeded to Apple’s App Store and are available now.

The sales model for the new apps is different from past Microsoft efforts. The Office apps can be used by anyone free of charge to view documents and present slideshows. But to create new content or documents, or edit existing ones, customers must have an active subscription to Office 365.
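The licensing split amounts to a simple capability check. A minimal sketch of that freemium logic (function and action names are illustrative, not Microsoft's actual API):

```python
def allowed_actions(has_office365: bool) -> set:
    """Office for iPad's freemium split: viewing and presenting are free;
    creating or editing documents requires an active Office 365 subscription."""
    actions = {"view", "present"}
    if has_office365:
        actions |= {"create", "edit"}
    return actions
```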

+ ALSO ON NETWORK WORLD Trial Microsoft software and services — for free +

Microsoft labeled it a “freemium” business model, the term used for free apps that generate revenue via in-app purchases.

Today’s announcement put an end to years of speculation about whether, and if so when, the company would trash its strategy of linking the suite with Windows in an effort to bolster the latter’s chances on tablets. It also reversed the path that ex-CEO Steve Ballmer laid out last October, when for the first time he acknowledged an edition for the iPad but said it would appear only after a true touch-enabled version had launched for Windows tablets.

It also marked the first time in memory that Microsoft dealt a major product to an OS rival of its own Windows.

“Microsoft is giving users what they want,” Carolina Milanesi, strategic insight director of Kantar Worldpanel ComTech, said in an interview, referring to long-made customer demands that they be able to run Office on any of the devices they owned, even those running a Windows rival OS. “The connection to Office 365 was also interesting in that this puts users within Microsoft’s ecosystem at some point.”

Prior to today, Microsoft had released minimalist editions of Office, dubbed “Office Mobile,” for the iPhone and Android smartphones in June and July 2013, respectively. Originally, the iPhone and Android Office Mobile apps required an Office 365 subscription; as of today, they were turned into free apps for home use, although an Office 365 plan is still needed for commercial use.

Talk of Office on the iPad first heated up in December 2011, when the now-defunct The Daily reported Microsoft was working on the suite, and added that the software would be priced at $10 per app. Two months later, the same publication claimed it had seen a prototype and that Office was only weeks from release.

That talk continued, on and off, for more than two years, but Microsoft stuck to its Windows-first strategy. Analysts who dissected Microsoft’s moves believed that the company refused to support the iPad in the hope that Office would jumpstart sales of Windows-powered tablets.

Office’s tie with Windows had been fiercely debated inside Microsoft, but until today, operating-system-first advocates had won out. Slowing sales of Windows PCs (last year, the personal computer industry contracted by about 10%) and the company’s continued struggle to gain meaningful ground in tablets exposed the folly of that strategy, outsiders argued.

Some went so far as to call Windows-first a flop.

Microsoft has long hewed to that strategy: The desktop version of Office has always debuted on Windows, for example, with a refresh for Apple’s OS X arriving months or even more than a year later.

Microsoft today added free Word, Excel and PowerPoint apps for the iPad to the existing OneNote.

On his first day on the job, however, Nadella hinted at change when he said Microsoft’s mission was to be “cloud first, mobile first,” a signal, said analysts, that he understood the importance of pushing the company’s software and services onto as many platforms as possible.

Nadella elaborated on that today, saying that the “cloud first, mobile first” strategy will “drive everything we talk about today, and going forward. We will empower people to be productive and do more on all their devices. We will provide the applications and services that empower every user — that’s Job One.”

Like Office Mobile on iOS and Android, Office for iPad was tied to Microsoft’s software-by-subscription Office 365.

Although the new Word, Excel and PowerPoint apps can be used free of charge to view documents and spreadsheets, and present PowerPoint slideshows, they allow document creation and editing only if the user has an active Office 365 subscription. Those subscriptions range from the consumer-grade $70-per-year Office 365 Personal to a blizzard of business plans starting at $150 per user per year and climbing to $264 per user per year.

Patrick Moorhead, principal analyst at Moor Insights & Strategy, applauded the licensing model. “It’s very simple. Unlike pages of requirements that I’m used to seeing from Microsoft to use their products, if you have Office 365, you can use Office for iPad. That’s it,” Moorhead said.

He also thought that the freemium approach to Office for iPad is the right move. “They’ve just pretty much guaranteed that if you’re presenting on an iPad you will be using their apps,” said Moorhead of PowerPoint.

Moorhead cited the fidelity claims made by Julie White, a general manager for the Office technical marketing team, who spent about half the event’s time demonstrating Office for iPad and other software, as another huge advantage for Microsoft. “They’re saying 100% document compatibility [with Office on other platforms], so you won’t have to convert a presentation to a PDF,” Moorhead added.

Document fidelity issues have plagued Office competitors for decades, and even the best of today’s alternatives cannot always display the exact formatting of an Office-generated document, spreadsheet or presentation.

Both Milanesi and Moorhead were also impressed by the strategy that Nadella outlined, which went beyond the immediate launch of Office for iPad.

“I think [Satya Nadella] did a great job today,” said Milanesi. “For the first time I actually see a strategy [emphasis in original].

“Clearly there’s more to come,” Milanesi said. “It was almost as if Office on iPad was not really that important, but they just wanted to get [its release] out of way so they could show that there’s more they bring to the plate.”

That “more” Milanesi referred to included talk by Nadella and White of new enterprise-grade, multiple-device management software, the Microsoft Enterprise Mobility Suite (EMS).

“With the management suite and Office 365 and single sign-on for developers, Microsoft is really doing something that others cannot do,” Milanesi said. “They made it clear that Microsoft wants to be [enterprises'] key partner going forward.”

Moorhead strongly agreed. “The extension of the devices and services strategy to pull together these disparate technologies, including mobile, managing those devices, authenticating users for services, is something Microsoft can win with. It’s a good strategy,” Moorhead said.

“This was the proof point of delivering on the devices and services strategy,” Moorhead concluded. “And that strategy is definitely paying off.”

Office for iPad can be downloaded from Apple’s App Store. The three apps range in size from 215MB (for PowerPoint) to 259MB (for Word), and require iOS 7 or later.


You are employed as a network administrator at ABC.com. ABC.com has an Active Directory
domain named ABC.com. All servers on the ABC.com network have Windows Server 2012 installed.
ABC.com has a server, named ABC-SR07, which is configured as a DHCP server. You have
created a superscope on ABC-SR07.
Which of the following describes a reason for creating a superscope? (Choose all that apply.)

A. To support DHCP clients on a single physical network segment where multiple logical IP
networks are used.
B. To allow for the sending of network traffic to a group of destination hosts.
C. To support remote DHCP clients located on the far side of DHCP and BOOTP relay agents.
D. To provide fault tolerance.

Answer: A,C
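Answer A describes the classic “multinet” case: several logical IP subnets share one physical segment, and the superscope groups their scopes so the DHCP server can lease from any of them. A minimal conceptual sketch of that grouping, using Python's standard ipaddress module (the scope ranges are invented for illustration):

```python
import ipaddress

# Two logical IP networks configured on the same physical segment (a "multinet").
superscope = [
    ipaddress.ip_network("192.168.1.0/24"),
    ipaddress.ip_network("192.168.2.0/24"),
]

def served_by_superscope(client_ip: str) -> bool:
    """The DHCP server treats the grouped scopes as one pool for the segment."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in superscope)
```

Option B, by contrast, describes multicast addressing, which is unrelated to superscopes.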


You are employed as a network administrator at ABC.com. ABC.com has an Active Directory
domain named ABC.com. All servers, including domain controllers, on the ABC.com network have
Windows Server 2012 installed.
ABC.com has a domain controller, named ABC-DC01, which is configured as a DNS server. You
are planning to unsign the ABC.com zone.
Why should you unsign the zone?

A. To remove the zone.
B. To change the current zone type.
C. To add a new primary zone.
D. To create an Active Directory-integrated zone.

Answer: B


You are employed as a network administrator at ABC.com. ABC.com has an Active Directory
domain named ABC.com. All servers on the ABC.com network have Windows Server 2012 installed.
ABC.com has a server named ABC-SR01, which hosts the IP Address Management (IPAM)
Server feature. ABC.com also has a server, named ABC-SR02, which is configured as a DHCP server.
You have been instructed to make sure that a user, named Mia Hamm, who belongs to the IPAM
Users group on ABC-SR01, has the ability to modify the DHCP scopes on ABC-SR02 by making
use of IPAM. You want to achieve this without assigning Mia Hamm any unnecessary permissions.
Which of the following actions should you take?

A. You should consider making Mia Hamm a member of the DHCP Administrators group on ABC-SR02.
B. You should consider making Mia Hamm a member of the IPAM Administrators group on ABC-SR02.
C. You should consider making Mia Hamm a member of the Local Administrators group on ABC-SR02.
D. You should consider making Mia Hamm a member of the Domain Administrators group.

Answer: A




According to Cisco, which four improvements are the main benefits of the PPDIOO lifecycle
approach to network design? (Choose four.)

A. Faster ROI
B. Improved business agility
C. Increased network availability
D. Faster access to applications and services
E. Lower total cost of network ownership
F. Better implementation team engagement

Answer: B,C,D,E

The PPDIOO life cycle provides four main benefits:
+ It improves business agility by establishing business requirements and technology strategies.
+ It increases network availability by producing a sound network design and validating the network operation.
+ It speeds access to applications and services by improving availability, reliability, security, scalability, and performance.
+ It lowers the total cost of ownership by validating technology requirements and planning for infrastructure changes and resource requirements.
(Reference: Cisco CCDA Official Exam Certification Guide, 3rd Edition)

Characterizing an existing network requires gathering as much information about the network as
possible. Which of these choices describes the preferred order for the information-gathering process?

A. Site and network audits, traffic analysis, existing documentation and organizational input
B. Existing documentation and organizational input, site and network audits, traffic analysis
C. Traffic analysis, existing documentation and organizational input, site and network audits
D. Site and network audits, existing documentation and organizational input, traffic analysis

Answer: B

This section describes the steps necessary to characterize the existing network infrastructure and
all sites. This process requires three steps:
Step 1. Gather existing documentation about the network, and query the organization to discover
additional information. Organization input, a network audit, and traffic analysis provide the key
information you need. (Note that existing documentation may be inaccurate.)
Step 2. Perform a network audit that adds detail to the description of the network. If
possible, use traffic-analysis information to augment organizational input when you are describing
the applications and protocols used in the network.
Step 3. Based on your network characterization, write a summary report that describes the health
of the network. With this information, you can propose hardware and software upgrades to support
the network requirements and the organizational requirements.

You want to gather as much detail as possible during a network audit with a minimal impact on the
network devices themselves.
Which tool would you use to include data time stamping across a large number of interfaces while
being customized according to each interface?

C. NetFlow
D. Cisco Discovery Protocol

Answer: C
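NetFlow fits this requirement because each exported flow record is time-stamped and tied to an interface, so per-interface detail can be collected passively from the device. A rough sketch of how a collector might bucket such records (the record fields are heavily simplified for illustration):

```python
from collections import defaultdict
from datetime import datetime

# Simplified flow records: (timestamp, interface, bytes).
flows = [
    (datetime(2014, 4, 9, 10, 0), "Gi0/1", 1500),
    (datetime(2014, 4, 9, 10, 1), "Gi0/2", 800),
    (datetime(2014, 4, 9, 10, 2), "Gi0/1", 700),
]

# Aggregate traffic per interface, as an audit tool would.
bytes_per_interface = defaultdict(int)
for ts, ifname, nbytes in flows:
    bytes_per_interface[ifname] += nbytes
```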


Which three are considered as technical constraints when identifying network requirements?
(Choose three.)

A. Support for legacy applications
B. Bandwidth support for new applications
C. Limited budget allocation
D. Policy limitations
E. Limited support staff to complete assessment
F. Support for existing legacy equipment
G. Limited timeframe to implement

Answer: A,B,F

Network design might be constrained by parameters that limit the solution. Legacy applications
might still exist that must be supported going forward, and these applications might require a
legacy protocol that may limit a design. Technical constraints include the following:
Existing wiring does not support new technology.
Bandwidth might not support new applications.
The network must support existing legacy equipment.
Legacy applications must be supported (application compatibility).

In which phase of PPDIOO are the network requirements identified?

A. Design
B. Plan
C. Prepare
D. Implement
E. Operate
F. Optimize

Answer: B


Plan Phase
The Plan phase identifies the network requirements based on goals, facilities, and user needs.
This phase characterizes sites and assesses the network, performs a gap analysis against best-practice
architectures, and looks at the operational environment. A project plan is developed to
manage the tasks, responsible parties, milestones, and resources to do the design and
implementation. The project plan aligns with the scope, cost, and resource parameters established
with the original business requirements. This project plan is followed (and updated) during all
phases of the cycle.



Scenario: A Citrix Engineer is configuring a new XenApp 6.5 farm in order to provide the Sales
department with access to a new CRM application. There are 400 users who will be accessing the
application, and the application load testing shows 512 MB of RAM utilization for each user during
peak time. XenApp will be installed on virtual machines, and the virtual machines will be hosted on
XenServer hosts.
All three of the XenServer hosts have the following hardware specifications:
1. Dual 6 core CPU
2. 96 GB of RAM
3. 300 GB SAN storage
The Citrix Engineer needs to ensure that users can access their XenApp resources in the event of
a server hardware failure.
Based on Citrix Best Practices, what would be the recommended configuration?

A. Create a pool with three hosts.
B. Create three pools with one host each.
C. Create a pool with two hosts and enable HA.
D. Create a pool with three hosts and enable HA.

Answer: D
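The sizing arithmetic behind this scenario can be checked directly. The figures below come from the scenario itself; the variable names are illustrative, and real sizing would also have to allow for hypervisor and OS overhead:

```python
# Rough RAM sizing for the XenApp scenario above (illustrative calculation only).

USERS = 400
RAM_PER_USER_GB = 512 / 1024          # 512 MB per user, expressed in GB
HOSTS = 3
RAM_PER_HOST_GB = 96

peak_demand_gb = USERS * RAM_PER_USER_GB               # RAM needed at peak
pool_capacity_gb = HOSTS * RAM_PER_HOST_GB             # capacity with all hosts up
surviving_capacity_gb = (HOSTS - 1) * RAM_PER_HOST_GB  # capacity after one host fails

print(peak_demand_gb, pool_capacity_gb, surviving_capacity_gb)  # 200.0 288 192
```

Pooling all three hosts and enabling HA maximizes the capacity available for restarting virtual machines after a hardware failure, which is why the single three-host pool with HA is the recommended configuration.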


Scenario: A Citrix Engineer needs to set up logging to monitor a Workload Balancing related issue
in a XenServer implementation. The engineer wants to capture maximum detail about this issue
before reporting it to Citrix Technical Support.
To increase the level of detail that will be captured in the log file, the engineer should _________
and __________. (Choose the two correct options to complete the sentence.)

A. open wlb.conf in a text editor
B. open logfile.log in a text editor
C. open auditlog.out in a text editor
D. modify the configuration options
E. enable logging for a specific trace

Answer: A,E


Scenario: Nether Tech has a XenDesktop farm with Windows 7 desktops. Users are accessing
their virtual desktops from different bandwidth and latency connection types.
Which setting should the engineer configure in a Citrix User policy in order to optimize moving images?

A. Enable Adaptive Display. Disable Progressive Display.
B. Disable Adaptive Display. Disable Progressive Display.
C. Enable Adaptive Display. Enable Progressive Display with Low Compression.
D. Disable Adaptive Display. Enable Progressive Display with Low Compression.

Answer: A


Scenario: Nether Tech’s corporate policy requires that passwords are NOT requested for XenApp
passthrough connections, except for those that pertain to members of the Nursing Users group.
Nurses connect to XenApp servers hosting applications in the Nurses Worker Group.
Click the Exhibit button to view a list of the policies configured in the environment.

An engineer needs to prioritize the three policies so that only members of the Nurses group are
prompted for passwords when they connect to their XenApp resources.
What is the correct order of prioritization for the policies from lowest to highest?

A. Unfiltered, Nurses, Corporate Users
B. Corporate Users, Nurses, Unfiltered
C. Unfiltered, Corporate Users, Nurses
D. Nurses, Unfiltered, Corporate Users

Answer: D


Scenario: Nether Tech recently upgraded to XenDesktop 5.5 and implemented a new VoIP
system. Virtual desktops have been integrated with the VoIP system. RTA (Real-time Audio) over
UDP has also been configured.
Which two steps should a Citrix Engineer take to optimize RTA/UDP traffic in the XenDesktop
implementation? (Choose two.)

A. Create a Citrix User policy.
B. Create a Citrix Computer policy.
C. Enable Multi-Stream in the policy.
D. Increase overall session bandwidth limit.
E. Set the audio redirection bandwidth limit in the policy.

Answer: B,C



There are ways around it, but upgrading may be simpler, cheaper

When Microsoft stops supporting Windows XP next month, businesses that must comply with Payment Card Industry (PCI) data security standards, as well as health care and financial standards, may find themselves out of compliance unless they call in some creative fixes, experts say.

Strictly interpreted, the PCI Security Standards Council requires that all software have the latest vendor-supplied security patches installed, so when Microsoft stops issuing security patches April 8, businesses processing credit cards on machines using XP should fall out of PCI compliance, says Dan Collins, president of 360advanced, which performs security audits for businesses.

But that black-and-white interpretation is tempered by provisions that allow for compensating controls – supplementary procedures and technology that help make up for whatever vulnerabilities an unsupported operating system introduces, he says.

These can include monthly or quarterly reviews of overall security, use of software to monitor file integrity and rebooting each XP machine every day in order to restore it to a known safe state, says Mark Akins, CEO of 1st Secure IT, which also performs compliance audits. That safe state can be reset using a Microsoft tool called SteadyState that was built for XP but not later versions of Windows.

“Risk is the factor,” he says, and mitigating it is the goal, but the mitigations must reduce risk just as effectively as the original regulatory requirement that is not being met. To some extent that is a subjective call, and depending on the auditor businesses may have more or less flexibility in what compensating controls are deemed OK, says Akins.

Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX) financial regulations have provisions similar to those in the PCI standard, says Collins. In fact, PCI provisions are pretty much the baseline for the other two, which have some additional requirements tacked on, he says. So the issue goes well beyond businesses that handle credit cards.

These workarounds may sound good to businesses that haven’t upgraded to Windows 7 or 8/8.1 yet, Akins says, but it’s not likely to save any time, effort or money. “For IT it’s easier to upgrade to Windows 7 or 8 versus implementing file integrity monitoring and installing SteadyState,” he says.

Compensating controls can place a big load on IT departments; for example, updating anti-virus software daily or constantly monitoring file integrity or evidence of intrusions, Collins says, isn’t simple. “It’s an arduous task,” he says.

“Compensating controls should be as short-term as possible,” and used only in order to keep key business applications running. Some legacy or proprietary business-critical software runs best or only runs on Windows XP, he says, and there are no feasible alternatives yet. “It’s a major issue if the software deployed is unstable on newer versions of Windows.”

That situation leaves a choice: businesses can migrate from Windows XP, implement compensating controls, buy replacement apps or rewrite old ones so they perform well on Windows 7 or 8/8.1, or pay Microsoft for extended XP support – also costly, but something that can buy time until a better solution is in place.

Some merchants that should comply with PCI could fly under the radar for a while without doing anything to address Windows XP non-compliance, he says. While it’s not advisable, they are not compelled to have security audits unless a merchant bank or credit processing service provider requires it – and that doesn’t happen all the time, Collins says.

PCI doesn’t require all businesses to meet the updated operating system requirement. If credit card data is collected by a business, encrypted using keys that are not in the control of that business and passed off to a separate entity for processing and storage, the collecting business doesn’t have to comply with the requirement for a fully patched and supported operating system, Akins says.

Still, the best option is to upgrade, Collins says. “It’s difficult to envision a case where the cost of upgrading is greater than the cost of compensating controls,” he says.



How VMware wants to reinvent the SAN

Written by admin
March 14th, 2014

VMware is out with Virtual SAN today, which aims to virtualize the storage layer

VMware has released a virtual Storage Area Network (Virtual SAN), which the company says will usher in a new era of policy-driven and virtual machine-centric storage provisioning.

SANs are typically made of disparate storage components aggregated to create a pool that can be tapped by compute resources. Traditionally, SANs have been set up using external storage boxes which are then controlled by a switch; they’re ideal for dynamic storage needs.

VMware is taking a different approach for Virtual SAN, however. Instead of using external storage arrays that are pooled, Virtual SAN is a software-only product that runs on x86 servers that an enterprise may already have. It creates the shared storage pool out of the internal storage resources of the servers. This means Virtual SAN can be deployed as an overlay approach without the need to invest in new hardware.

Virtual SAN also takes a somewhat novel approach to provisioning the storage. Traditionally, SANs have worked by setting up Logical Unit Numbers (LUNs) or other connections between the storage and the compute. Instead, Virtual SAN is integrated directly into the kernel of VMware’s ESX hypervisor. That allows virtual machines to dictate how much storage they need, and the Virtual SAN software then provisions it automatically.

Users set templates or policies related to how much storage their VMs can request, how fault tolerant the storage should be (and therefore how many copies of it there will be) and what sort of performance it requires (solid state versus hard drive). Then, when the VM is spun up, Virtual SAN automatically provisions the necessary storage within the parameters of the policies that have been established.
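The policy-driven model described above can be sketched in a few lines. Note that the names and the `provision` helper below are purely hypothetical illustrations of the idea, not VMware’s actual Virtual SAN API:

```python
# Hypothetical sketch of policy-driven storage provisioning (NOT the real
# Virtual SAN API): a VM references a policy, and the storage layer derives
# the replica count and media tier from that policy.

def provision(vm_name, policy):
    # One extra copy of the data per failure the storage must tolerate.
    replicas = policy["failures_to_tolerate"] + 1
    tier = "ssd" if policy["needs_high_iops"] else "hdd"
    return {"vm": vm_name, "capacity_gb": policy["capacity_gb"],
            "replicas": replicas, "tier": tier}

# A policy template an administrator might define once and reuse for many VMs.
gold_policy = {"capacity_gb": 100, "failures_to_tolerate": 1, "needs_high_iops": True}
allocation = provision("crm-db-01", gold_policy)
print(allocation)
```

The point of the design, as Robinson notes below, is that the administrator reasons about VM-level policies rather than LUNs, volumes and RAID levels; the storage layer translates the policy into concrete placement decisions.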

Simon Robinson, research vice president for storage at the 451 Research Group likes the idea. “Our research has been telling us for years that IT and storage managers are pretty tired of all the complexity involved in managing storage – managing LUNs, volumes, RAID levels, etc., and server virtualization makes it even more so,” he says. “For organizations that are well down the virtualization path, having a VM-centric way of managing their storage makes a lot of sense.”

Virtual SAN has been in development for three years and in beta for about a half year, since VMware announced it at VMWorld 2013. In that time 12,000 customers have signed up for the beta. Ryan Hoenle, director of the non-profit Doe Fund, is a VMware compute virtualization customer and has been testing Virtual SAN in its DR platform. “It’s really a no-brainer when the hypervisor you want to use also includes this virtualized storage,” he says. Virtual SAN allows the Doe Fund to have redundancy where Hoenle needs it and not pay for redundancy where he doesn’t. “We get that same sort of flexibility from a storage perspective that we gained from a compute perspective when we went to VMware.”

VMware isn’t alone in taking this policy-driven and hypervisor-integrated approach to a SAN. Robinson notes that there are a variety of startups doing this as well, but they take a slightly different approach. Companies like Nutanix and SimpliVity offer converged infrastructure systems which combine other features such as deduplication, compression and sophisticated snapshots into their platforms, for example. Some startups also enable multi-hypervisor support. But, one advantage to VMware’s Virtual SAN is that it is “baked in” with existing VMware tools. “Virtual SAN represents a major validation of this approach, and that will be good for all players,” Robinson says.

With Virtual SAN, VMware is finishing off the trifecta of its software defined data center (SDDC) strategy. The company is already clearly established in the compute virtualization market with a leading platform there. It bought Nicira and is working on its network virtualization strategy. Storage can be thought of as a last frontier for VMware to conquer, and Virtual SAN is a piece of that strategy.

VMware spokespeople say that they don’t expect Virtual SAN to replace an existing SAN or NAS (network attached storage); they see it as a complementary platform that is especially helpful for use cases such as disaster recovery, test and development, and virtual desktops. It’s generally available starting today, priced at $2,495 as stand-alone software.




CTP Certified Treasury Professional

Written by admin
March 12th, 2014

Which of the following are important uses of variance analysis in comparing actual cash flows with
projected cash flows?
I. Identifying unanticipated changes in inventory
II. Enhancing short-term investment income
III. Validating a capital budget
IV. Identifying delays in accounts receivable collections

A. I and II only
B. I and IV only
C. II and IV only
D. I, II, III, and IV

Answer: B


An instrument that gives the right to buy a stated number of shares of common stock at a specified
price is known as:

A. an equity warrant
B. a put option
C. a zero coupon bond
D. a subordinated debenture

Answer: A


A company plans to issue additional equity within the next 12 months but needs to issue debt at a
low interest rate now. Which of the following instruments would BEST meet this objective?

A. Convertible bonds
B. Private placement issue
C. Preferred stock
D. Subordinated debentures

Answer: A


An arrangement in which a borrower makes periodic payments to a separate custodial account
that is used to repay debt is known as a:

A. sinking fund
B. balloon payment
C. mortgage
D. zero-coupon bond

Answer: A


Which of the following instruments simplifies the paperwork connected with loans that have
multiple advance features?

A. Master note
B. Banker’s acceptance
C. Indenture agreement
D. Note purchase agreement

Answer: A




Security comes first, with a premium on speed, when upgrading to a supported Microsoft operating system

CIOs who haven’t moved their companies from Windows XP by now ought to be fired, some people think, but those who haven’t and are still on the job have options for saving their bacon.

“Start,” is the first piece of advice from Shawn Allaway, CEO of Converter Technology, which specializes in migrating businesses to new versions of Windows and Microsoft Office. Even if the project isn’t completed before Microsoft ends support for XP on April 8, it’s important to minimize the window of exposure during which XP runs unsupported on corporate networks.

Those who haven’t started yet probably should be fired for leaving their businesses open to the impending threat, he says. “This is not like Microsoft dropped this on you six months ago,” he says. “You’re putting your organization at risk.”

That threat is that vulnerabilities discovered after April 8 will never be patched by Microsoft, leaving Windows XP open to an ever-expanding range of attacks. In addition, many applications will no longer be supported when running on Windows XP, Gartner warns.

It’s possible and even desirable to sign a custom support contract with Microsoft that provides continued updates after the end-of-support date, but it is also expensive, says Directions on Microsoft. If that’s not possible, the main goal is to minimize the risks of running unsupported XP, which means a review and possible beefing up of security.

Isolating XP machines on corporate networks and limiting what devices they can communicate with is essential, and there are tools for this. For instance, Unisys Stealth can limit a machine’s access to other machines and hide it from attackers, says Unisys CIO Dave Frymier. A Stealth shim in the IP stack of XP machines sits between the link and network layers, decrypting IP payloads when it can and dropping packets when it can’t. A machine can talk to another only if both are members of the same community of interest as defined by Active Directory, he says.
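The community-of-interest idea can be illustrated with generic logic (this is a sketch of the concept only, not Unisys Stealth’s implementation; the machine names and communities are invented). Delivery reduces to a set-intersection check between the communities assigned to two machines:

```python
# Generic community-of-interest check (illustrative only): a packet is
# delivered only if sender and receiver share at least one community;
# otherwise it is dropped, much as the shim drops payloads it cannot decrypt.

COMMUNITIES = {
    "xp-pos-01":  {"payments"},   # isolated XP point-of-sale machine
    "pay-srv-01": {"payments"},   # payment server it must reach
    "hr-db-01":   {"hr"},         # unrelated system
}

def may_communicate(src, dst):
    return bool(COMMUNITIES.get(src, set()) & COMMUNITIES.get(dst, set()))

print(may_communicate("xp-pos-01", "pay-srv-01"))  # True: both in "payments"
print(may_communicate("xp-pos-01", "hr-db-01"))    # False: no shared community
```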

Migrating isn’t a quick process, and the larger the network, the longer it takes. The rule of thumb is that for a 10,000-desktop network with 15 offices, it will take two to three months to complete the project, Allaway says.

A first step toward the transition is testing application compatibility with a newer operating system, getting new licensing agreements and assessing the need for and buying new hardware.

Like any OS rollout, this one will be done in phases. Organizations that think they’ll miss the deadline should prioritize their applications and users and migrate the most important and most vulnerable first to reduce the risks, Gartner says.

Some of the preparatory steps can be sped up using tools. For example, ChangeBase and AppDNA can help determine whether business apps are compatible with newer OSs. If not, businesses may need to buy newer versions that are or, in the case of custom software, recode them, Allaway says.

Microsoft is offering a free and now unsupported version of Laplink’s PCmover Express for Windows XP to transfer files from XP machines to machines with newer operating systems. PCmover Professional ($60) also moves applications, if that’s called for.

Allaway says it’s a good time to rid the network of deadware – rogue apps installed by end users or corporate apps that are no longer used – that have avoided detection during housecleaning over the years. “There’s a sense of urgency [about the XP migration] but clean a little junk out of your network if you can,” Allaway says. Those who have waited a decade to upgrade the operating system may have let this slide.

If an apps inventory is long overdue, it is also a good time to check whether apps licenses are in synch with the number of workers actually using the software. Restructuring license agreements may produce cost savings, he says.

PC upgrades may be needed to support a new operating system, but hardware needs may go beyond that. Old printers may lack drivers for Windows 7 or Windows 8, and some machines, such as fax machines, may no longer be necessary at all, he says.

Like any desktop refresh project, moving to Windows 7 or Windows 8/8.1 requires someone in charge – either in-house or a consultant – a plan for a phased rollout and personnel to help resolve the inevitable issues that will arise after the rollout. “Don’t resource-starve the project,” Allaway says. “It ultimately costs more and takes longer.”

One thing to remember is that on April 8, Windows XP will keep chugging along, but the risk of being successfully attacked keeps rising after that. “It’s not Y2K, where come April it’s not going to work,” he says.



Another executive shakeup at Microsoft

Written by admin
March 4th, 2014

Rumor: Biz development head Bates, marketing chief Reller call it quits

Just a month after Satya Nadella took over as Microsoft CEO the executive inner circle is being overhauled, with two key leaders leaving the company and a third assuming significant new power.

Executive vice presidents Tony Bates, the former CEO of Skype, and Tami Reller, who cut her teeth on Windows, are leaving the company, according to a post by Kara Swisher on re/code.

Bates had reportedly been a top contender for CEO and was serving as head of business development and evangelism. Reller was head of marketing.

Bates’ job will be filled temporarily by Executive Vice President Eric Rudder, who is in charge of advanced strategy, according to the report.

Reller’s job is being expanded and filled by Chris Capossela, a Microsoft marketing executive who will now be executive vice president of both marketing and advertising, the report says.

Both Bates and Reller were in ambiguous jobs under a reorganization put in place last year by outgoing CEO Steve Ballmer.

Reller was named executive vice president of marketing under that new management scheme, but Reller essentially had to share the job with Mark Penn, another executive vice president, who “will take a broad view of marketing strategy and will lead with Tami the newly centralized advertising and media functions.”

Similarly, Bates had uncertain duties and power in dealing with manufacturing partners. Under the Ballmer reorganization, “OEM will remain in [the sales marketing and services group] with Kevin Turner with a dotted line to Tony who will work closely with Nick Parker on key OEM relationships.” At best he had fragmented authority.

Bates came onboard at Microsoft when the company bought Skype for $8.5 billion in 2011. Reller was brought into Microsoft when it bought Great Plains Software in 2001. Earlier she was both the chief financial officer and the chief marketing officer for Microsoft’s Windows division, which was moved into the operating systems division under Ballmer’s reorganization. She assumed her role as executive vice president when Ballmer reorganized.

News of this latest shakeup comes just a week after Nadella cleared room at the top for Stephen Elop, the former CEO of Nokia who is joining the company as an executive vice president in charge of devices and studios when Microsoft’s purchase of Nokia is finalized.

That means the current occupant of the slot, Julie Larson-Green, will move over and down to the newly created position of chief experience officer (CXO), in which she will report to another executive vice president, Qi Lu, who is in charge of applications (Office, SharePoint, Yammer, Lync, Skype) and services (Bing and MSN).

According to an email Larson-Green sent to her staff and published by Mary Jo Foley in her All About Microsoft blog, Elop is scheduled to step into his new role as soon as Microsoft’s purchase of Nokia’s phone business is complete. Until then, Larson-Green will continue in her current role.




Twitter faces growing pressure to attract new users and dramatically increase engagement on the platform. Can it ever rival the numbers and growth of Facebook?

Twitter’s honeymoon as a publicly traded company could be coming to an end. With growth stalling and timeline views on the decline for the first time ever, Twitter finds itself at a crossroads.

Twitter Suffers from Growing Pains
While its quest for more ad revenue continues unabated, the company faces even greater pressure to attract new users and dramatically increase engagement on the platform.

“Twitter has seen its sequential MAU growth rate decelerate sharply after hitting 50 million, raising concerns that its quirkier nature might cap its potential audience in the U.S. at a ceiling well below that of Facebook.”
– Seth Shafer, SNL Kagan

“We as a company aren’t going to be satisfied — I am not going to be satisfied — until we reach every connected person on the planet, period,” CEO Dick Costolo said at last week’s Goldman Sachs Technology and Internet Conference.

The challenge ahead for Twitter coupled with Costolo’s grandiose goal puts the company in a predicament unlike any it has confronted before. It also fans the unfortunate, yet inevitable comparison to Facebook. While Twitter ended 2013 with an average monthly active user (MAU) base of 241 million, Facebook surpassed 1.23 billion. For every user that engages on Twitter, at least five are actively using Facebook.
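That five-to-one comparison follows directly from the reported averages:

```python
# Ratio of the average monthly active user (MAU) figures reported for late 2013.
facebook_mau = 1_230_000_000   # 1.23 billion
twitter_mau = 241_000_000      # 241 million

ratio = facebook_mau / twitter_mau
print(round(ratio, 1))  # ≈ 5.1: at least five active Facebook users per Twitter user
```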

“Twitter needs to do something to grow to the size of Facebook but the jury is still out if there’s a clear path for Twitter or any other company to do that, or if Facebook is a once-in-a-lifetime anomaly that was in the right place at the right time,” says Seth Shafer, associate analyst at SNL Kagan.

Costolo hasn’t helped matters by failing to meet previous internal estimates for growth, either. Early last year the executive reportedly told employees that he expected to reach 400 million MAUs by the end of 2013. Failing to double its active user base last year, Twitter instead reported a 30 percent increase.

“Twitter’s overall MAU growth is still pretty healthy, but it’s all coming internationally where users monetize at a much lower rate. U.S. growth has slowed significantly at about 50 million MAUs,” says Shafer.

Facebook blew past 50 million U.S. MAUs without blinking. Moreover, its sequential increases didn’t dip into the single digits until it surpassed about 120 million users in the U.S., according to SNL Kagan data.

“Twitter, however, has seen its own sequential MAU growth rate decelerate sharply after hitting 50 million MAUs, raising concerns that its quirkier nature and niche focus might cap its potential audience in the United States at a ceiling well below that of Facebook,” Shafer adds.

Twitter’s ‘Road Map’ for Growth
Nonetheless, Twitter’s lead executive says he is optimistic about rising user growth. While the company is being careful not to make specific promises or announcements about how it will improve on these points, Costolo has frequently referenced a road map of late that lays out a strategy for achieving better growth over the course of the year.

Pointing to field research and internal data on how users engage with the platform, he hints at a series of new features and design changes that are expected to drive new user growth. Twitter’s vault of data and newfound capability to experiment with multiple beta tests simultaneously has “informed a very specific road map for the kinds of capabilities we want to introduce to the product that we believe will drive user growth,” says Costolo.

He is quick to point out, however, that no single product feature or change to the platform will lead to a “quantum leap change in growth.” Instead it will be an accumulation of numerous tweaks throughout the year that give him confidence. “You’re going to be seeing a significant amount of experimentation of different ideas we have,” he says.

While dispelling concerns about lagging growth in the recently closed quarter, Costolo says there was no specific event or trend during the quarter that meaningfully impacts how the company thinks about user growth. Indeed, improvements made during the final months of 2013, particularly in messaging and discovery, have already paid off. Favorites and retweets rose 35 percent from the previous quarter, and direct messages jumped 25 percent over the same period, according to Twitter.

“I’m starting to see those interactions do what we hoped they would do,” he says. “It’s more about pushing the content forward and pushing back the scaffolding of Twitter.”

The company also hopes to attract new users by simplifying its on-boarding process and dramatically reducing the 11 steps a new account currently requires.

Under the Shadow of Facebook
Twitter has successfully maneuvered through its fair share of challenges before. Be it the fail whale sightings and power struggles of its early days or the feverish hunt for ad revenue of late, the company has found its way.

But now with its first complete quarter as a public company in the rear view, the demands for growth from investors will only get louder with each passing quarter. Twitter will have to deliver some big numbers in 2014 to keep Wall Street happy, but Costolo’s comments also suggest that much of that success will depend on a clear differentiation between Twitter’s role in the world and that of Facebook.

“Twitter is this indispensable companion to life in the moment,” Costolo says. “If you think about it as a product, I think that misses the impact and the reach of what we really believe is a content, communications and data platform.”

By that distinction, the opportunities afforded to Twitter are “enormous,” says Costolo. “We believe we are the only platform where you get an understanding of wide reach in the moment while it’s happening.”

Tapping into big data and personalization could help, but it won’t move the needle far enough for Twitter to reach the scale of Facebook, says Shafer of SNL Kagan.

Emerging from under the shadow of Facebook will be a struggle for Twitter unless it makes dramatic changes to the service or goes on an acquisition binge aimed at cobbling something larger together, he adds. And even that would be a challenge because of course, “we already have a pretty big thing like that called Facebook,” Shafer says.



7 Technology Job Boards to Find or Fill Positions

Written by admin
February 27th, 2014

Job boards are an important element of a well-rounded job search for workers, or an employee search for recruiters and hiring managers, but with so many options, where should you start? This guide is a good place. We look at job boards that focus squarely on technology and IT jobs.

Technology Job Boards
Job hunting is hard work. It’s either your full-time job or you’re finding time after working a full day and on weekends. It’s important to prioritize and use a multi-pronged attack. Job boards are one part of that equation.

For employers, good IT professionals are hard to find and ones that are specialized are even harder to find. Niche IT and technology job boards can help you find skilled talent in less time. So whether you’re looking for a new job or a new employee, niche technology job boards can help you find top talent and job offers that you may not find in other places.

CIO.com’s IT Jobs
If you’re a CIO or senior IT executive and in the market for your next gig, CIO.com’s IT jobs board is a good place to start your search. (Before you start thinking about bias, the slides are in alphabetical order.)

This job board specializes in tech jobs at the highest levels. At the time of this article, there were more than 1,000 listings for IT pros and executives.
Listing Cost:
$295 for 60 days

Crunch Board
IT job seekers would do well to check out TechCrunch’s CrunchBoard. This niche site offers technology-related job listings as well as editorial from the TechCrunch Network.
Listing Cost:
$200 – One Job Posting (30 days)
$895 – 5 Pack of Job Postings
$1495 – 10 Pack of Job Postings

Dice
Dice is one of the best-known tech-centric job boards around. Communities within Dice are specialized by skill or interest, which can make it easy for an employer to find specialized job candidates. It has a variety of listings, from straight-up job posts to full-service recruiting packages.
Listing Cost:
Job Posting Express option starts at $395 for 30 days

iCrunchData
If analytics and big data are where your skills lie, then iCrunchData may be just what you’re looking for. Here you’ll find tech-centric jobs that focus specifically on big data and analytics, as well as tech jobs in general.
Listing Cost:
iCrunchData offers job posting credits starting at $375 for one credit, with a price break for each additional credit. The credits don’t expire, and if those prices don’t work you can bid your own. Aside from that, it offers unlimited job packages for hiring managers starting at $595.

IT Job Pro
IT Job Pro is a portal that, as the title implies, delivers technology job listings to job seekers. It offers jobs in the U.S., Europe, Asia, Australia and New Zealand.
Listing Cost:
$120 per job

Venture Beat
VentureBeat’s job board offers another place for IT job seekers to look for opportunities. It has a small database of jobs that are mainly IT or technology-based.
Listing Cost:
$99 for 30 Days

We Work Remotely
As the name implies, We Work Remotely focuses on jobs that allow employees to telecommute. While not technically a technology board, the telecommuting angle lends itself well to the IT job market. A simple glance at the home page shows listings dominated by IT and development positions.
Listing Cost:
$200 for 30 Days

This list is by no means exhaustive. There are many other job boards out there (e.g., LinkedIn, Indeed, CareerBuilder or Monster) that cover a wide spectrum of jobs. Which sites did you use to find your last IT job?

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com


Gigabit Internet Service Providers Challenge Traditional ISPs

Last fall, the New America Foundation’s Open Technology Institute published a study examining high-speed Internet prices around the world. It found that the bulk of the United States pays higher prices for slower service than most of the planet.

Internet access itself is only part of that cost. Comcast and AT&T, for example, charge monthly fees for modems, routers and additional wiring (along with cable boxes, remotes and recording equipment for cable customers), which almost doubles the advertised monthly cost. Internet service providers can do that because, apart from a handful of spots in America, the competition is severely limited, if not nonexistent.

That could change, experts say – and Google’s high-speed, low-cost gigabit Internet service deserves the credit.

Google Fiber By the Numbers: Better, Faster, Cheaper
Google Fiber costs $70 per month, or $120 per month for an Internet/TV bundle, with installation fees of up to $30. Google says it’s 100 times faster than the average cable Internet connection. Compared to what other services list on their websites, Google Fiber is 22 times faster than AT&T’s best offering, 10 times faster than Comcast’s and 3.3 times faster than Verizon’s top choice – and it costs 24 times less than AT&T, 15 times less than Comcast and 10 times less than Verizon.

Google Fiber is even cheaper per megabit than ISPs’ slowest connection options. Comcast’s slowest plan, at 6 Mbps, costs $8.33 per megabit, while AT&T’s comparable package is $7.67 per megabit. Google’s gigabit plan is 167 times faster but costs only $0.07 per megabit.
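
The per-megabit comparison is simple arithmetic: monthly price divided by advertised speed. A quick sketch using the figures above (the Comcast and AT&T monthly prices are back-computed from the quoted per-megabit rates, so treat them as approximations):

```python
# Price per megabit is just monthly price divided by advertised speed in Mbps.
# The Comcast and AT&T monthly prices below are assumptions inferred from the
# quoted per-megabit rates, not confirmed list prices.
plans = {
    "Google Fiber (1 Gbps)": (70.00, 1000),
    "Comcast (6 Mbps)": (50.00, 6),   # ~$8.33/Mbps implies roughly $50/month
    "AT&T (6 Mbps)": (46.00, 6),      # ~$7.67/Mbps implies roughly $46/month
}

for name, (price, mbps) in plans.items():
    print(f"{name}: ${price / mbps:.2f} per megabit")

# The speed gap: 1000 Mbps vs. 6 Mbps is roughly a 167x difference.
print(round(1000 / 6))
```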

What’s more, Comcast Executive Vice President David L. Cohen has argued that Americans don’t need high-speed Internet because they can’t handle it. (Time Warner agrees.) According to Cohen, even if Comcast could deliver gigabit service like Google Fiber’s 1 Gbps, most customers couldn’t reach those speeds because of insufficient equipment. (He neglected to mention that Google provides high-speed-compatible equipment to all customers at no additional cost, along with a free Nexus 7 tablet, unlike the monthly fees Comcast charges for its equipment.)

Forrester communications and networking analyst Dan Bieler says Google Fiber increases Google’s leverage in negotiations with carriers over connectivity provisioning. Clearly, the carriers and cable providers want to retain a major role in providing connectivity. If Google builds its own networks to homes and business users, carriers risk losing customers to Google.

“Google Fiber has forced the competition to take a closer look at the need to roll out ‘real’ broadband at a reasonable price,” Bieler says. This will happen in areas with “high purchasing power and a high business density,” but it’s less likely in rural areas, where fiber investments aren’t always as easy to justify. “Competition for fiber will increase,” Bieler says, “but not everywhere.”

Ian Keene, research analyst and vice president at Gartner, agrees: “High bandwidths of 100 Mbps and above will only be available in the large cities for the foreseeable future.”

Telecommunications firms and cable multiple-system operators (MSOs) are competing to get fiber closer to subscribers, Keene says. Telcos have mixed feelings about fiber to the home. Some bring fiber closer while using existing copper to provide broadband, since the emerging G.fast copper standard can deliver 500 Mbps service. Others are swallowing the capital expenses needed to install new cables and equipment in the home. Finally, along with competition, government broadband initiatives are driving improved services, he says.

Gigabit Internet Arriving, Slowly But Surely
Google Fiber – installed throughout Kansas City, Kan. and Kansas City, Mo., with Austin, Texas and Provo, Utah next on the list – isn’t the only gigabit Internet provider in the United States. The ranks vary, too, from ISPs to electric companies to municipal governments, all offering services for a fraction of the cost of cable. This suggests that competition is coming from all corners.

Chattanooga, Tenn., can thank its electric company, EPB, for the gigabit service available throughout its nine-county service area. EPB needed systems that could monitor and communicate with new digital equipment – but the nation’s biggest phone and cable companies said they couldn’t deliver for another decade or more. So EPB became the sole ISP for Chattanooga, also referred to as Gig City, and now manages 8,000 miles of fiber for 56,000 commercial and residential Internet customers. The service costs about $70 a month (compared to $300 a month before EPB stepped in).

In addition, the Vermont Telephone Co. has brought gigabit Internet to Burlington, the state’s largest city, and Springfield, the town where it’s headquartered. CTO Justin M. Robinson says “it’s certainly not without concern” being among a handful of companies providing gigabit Internet, “but we like to think what we are doing on a small scale here in Vermont could be replicated in a thousand different places across the country or, perhaps, even expanded to become a nationwide goal.”

Vermont Telephone’s gigabit Internet rollout is part of a larger project, funded in part by the federal Broadband Initiatives Program, that’s also upgrading the state’s voice telephone switch, adding an IPTV video head-end and deploying a 4G/LTE wireless network to most of the state, Robinson says.

According to Robinson, the goal is to build fiber to all 17,500 Vermont Telephone customers. Approximately 3,500 homes and businesses have been converted so far, with broadband penetration for those converted exceeding 80 percent. The IPTV video service, built using the former Microsoft Media Room platform, which Ericsson recently acquired, is in a trial phase.

One of the most compelling reasons for the gigabit Internet rollout, Robinson says, was the realization that significantly higher throughput has only a minor effect on total usage but still improves customers’ experience.

“They can access data more quickly and perform multiple tasks at once,” Robinson says. “My wife can watch a movie on Netflix and browse Reddit while, at the same time, I remotely connect to the office … listen to streaming music from Pandora and download the latest [game] from Steam in the background.

“At GigE speeds,” Robinson continues, “the worry about bandwidth disappears. The bandwidth is always available and waiting for the customer, instead of the customer waiting for the bandwidth.”




Top IT projects that waste budget dollars

Written by admin
February 19th, 2014

A number of common IT projects that seem like they should add value rarely do. Here are what I consider the top IT projects that waste budget dollars.

The role of the CIO has changed more in the past five years than any other position in the business world. Success for the CIO used to be measured in bits and bytes; now it’s measured by business metrics. Today’s CIO needs to think about IT more strategically and focus on projects that lower costs, improve productivity or, ideally, both.

However, many IT projects seem to be a waste of time and money. It’s certainly not intentional, but a number of projects that seem like they should add value rarely do. Here are what I consider the top IT projects that waste budget dollars.

Over provisioning or adding more bandwidth
Managing the performance of applications that are highly network-dependent has always been a challenge. If applications are performing poorly, the easy thing to do is just add more bandwidth. Seems logical. However, bandwidth is rarely actually the problem, and the net result is usually a more expensive network with the same performance problems. Instead of adding bandwidth, network managers should analyze the traffic and optimize the network for the bandwidth-intensive applications.
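
The point that more bandwidth rarely fixes a slow application can be sanity-checked with a back-of-the-envelope model: for a “chatty” application that makes many small request/response exchanges, total time is dominated by round-trip latency, not throughput. A hypothetical sketch, with all numbers invented for illustration:

```python
def transfer_time(size_mb, bandwidth_mbps, rtt_ms, round_trips):
    """Rough model of an application transaction: time to push the bits
    plus time spent waiting on request/response round trips."""
    serialization = (size_mb * 8) / bandwidth_mbps   # seconds on the wire
    latency = (rtt_ms / 1000) * round_trips          # seconds waiting on RTTs
    return serialization + latency

# A chatty app: 2 MB of data, 80 ms RTT, 200 request/response exchanges.
slow = transfer_time(2, 10, 80, 200)    # on a 10 Mbps link
fast = transfer_time(2, 100, 80, 200)   # with 10x the bandwidth

# With latency dominating, 10x the bandwidth barely moves the needle.
print(f"10 Mbps: {slow:.1f}s, 100 Mbps: {fast:.1f}s")
```

In this toy scenario the upgrade shaves only the serialization time; the 16 seconds of accumulated round trips remain untouched, which is why traffic analysis beats adding bandwidth.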

Investing in fault management tools
On paper, it makes sense to invest in fault management. You deploy network devices, servers, security products and other infrastructure, so of course you want to know when devices are up or down. The fact is, though, that today we build our networks with so much redundancy that the loss of any single device has little impact on application performance. Also, most fault management tools have a big blind spot when it comes to virtual resources, as they were designed to monitor physical infrastructure. IT organizations should instead focus on performance solutions that can isolate what’s been “too wrong for too long” to solve the nagging “brownouts” that cause user frustration.

Focusing IT energy only on the “top talkers”
When I talk to IT leaders about new initiatives, much of the focus is on the top 5 or 10 applications, which makes some sense conceptually, as these are the apps the majority of workers use. Instead, IT leaders should monitor all applications and correlate usage to business outcomes to determine and refine best practices. For example, a successful branch office could be a heavy user of LinkedIn, Salesforce.com and Twitter. In aggregate, these might not be among the company’s top 10 applications, and the usage would fly under the radar. If organizations could monitor all applications and link consistent success to specific usage patterns, unknown best practices could be discovered and mapped across the entire user population.

Using mean time to repair (MTTR) to measure IT resolution success
ZK Research studies have revealed a few interesting data points when it comes to solving issues. First, 75% of problems are actually identified by the end user instead of the IT department. Also, 90% of the time taken to solve problems is actually spent identifying where the problem is. This is one of the reasons I’m a big fan of tools that can separate application and network visibility to laser in on where exactly a problem is. This minimizes “resolution ping pong,” where trouble tickets are bounced around IT groups, and enables IT to start fixing the problem faster. If you want to cut the MTTR, focus on identification instead of repair, as that will provide the best bang for the buck.

Managing capacity reactively
Most organizations increase the capacity of servers, storage or the network in a reactive mode. Don’t get me wrong, I know most companies try to be proactive. However, without granular visibility, “proactive” often means reacting to the first sign of trouble, which is often too late. Instead, IT departments should establish baselines and monitor how applications deviate from the norm to predict when a problem will occur. For example, a baseline could capture the “normal” performance of a business application. Over four successive months, the trend might show a slight degradation in the application’s performance month after month. No users are complaining yet, but the trend is clear, and if nothing is done, there will be user problems. Based on this, IT can make the appropriate infrastructure changes to ensure users aren’t impacted.
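
The baseline-and-deviation approach can be sketched in a few lines: compute a norm from historical measurements, then flag a sustained month-over-month drift before users start complaining. A minimal illustration, where the threshold and sample data are invented for the example:

```python
from statistics import mean

def drifting(monthly_response_ms, window=4, tolerance=1.1):
    """Flag a sustained degradation: each of the last `window` readings is
    worse than the one before it, and the latest reading exceeds the
    historical baseline by more than `tolerance`x."""
    history = monthly_response_ms[:-window]
    recent = monthly_response_ms[-window:]
    baseline = mean(history)
    worsening = all(a < b for a, b in zip(recent, recent[1:]))
    return worsening and recent[-1] > baseline * tolerance

# Response time creeping up month after month, with no complaints yet:
samples = [200, 205, 198, 202, 210, 225, 240, 260]
print(drifting(samples))  # flags the trend before users feel it
```

Requiring both a consistent worsening trend and a breach of the baseline keeps one noisy month from triggering a false alarm.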

The IT environment continues to get more complex as we make things more virtual, cloud-driven or mobile. It’s time for IT to rethink the way it operates and leverage the network to provide the necessary visibility to stop wasting money on the things that don’t matter and start focusing on issues that do.

Cisco CCNA Training, Cisco CCNA Certification

Best CCNA Training and CCNA Certification and more Cisco exams log in to Certkingdom.com



So Long IT Specialist, Hello Full-Stack Engineer

Written by admin
February 17th, 2014

At GE Capital, the business is focused not simply on providing financial services to mid-market companies but also on selling the company’s industrial expertise. It might help franchisees figure out how to reduce power consumption or aid aircraft companies with their operational problems. “It’s our big differentiator,” says GE Capital CTO Eric Reed. “It makes us sticky.”

And within IT, Reed is looking not for the best Java programmer in the world or an ace C# developer. He wants IT professionals who know about the network and DevOps, business logic and user experience, coding and APIs.

IT Specialist Out, Full-Stack Engineer In
It’s a shift for the IT group prompted by an exponential increase in the pace of business and technology change. “The market is changing so much faster than it was just two or five or, certainly, 10 years ago,” says Reed. “That changes the way we think about delivering solutions to the business and how we invest in the near- and long-term. We have to think about how we move quickly. How we try things and iterate fast.”

But agility is a tall order when supporting a $44.1 billion company with more than 60,000 employees in 65 countries around the world. “There are several markets we play in, and we can’t be big and slow,” says Reed. “The question is how we make ourselves agile as a company our size.”

Like many traditional IT organizations, GE Capital had one group that developed and managed applications and another that designed and managed infrastructure. Over time, both groups had done a great deal of outsourcing. It wasn’t an organizational structure designed for speed.

An engineer by training, Reed saw an opportunity to apply the new product introduction (NPI) process developed at GE a couple of decades ago to the world of IT development. Years ago, a GE engineer might split his or her time between supporting a plant, providing customer service and developing a new product. “With NPI, we turned that on its ear and said you’re going to focus only on this new product,” explains Reed. “You take people with different areas of expertise and you give them one focus.”

That’s what Reed did with IT. “We take folks that might do five different things in the course of the day and focus them on one task — with the added twist being that you can’t be someone who just writes code,” says Reed.

A New Type of IT Team Forms
Last year, Reed pulled together the first such team to develop a mobile fleet management system for GE Capital’s Nordic region. He assembled a diverse group of 20, who had previously specialized in networking, computing, storage, applications or middleware, to work together virtually. He convinced all of the company’s CIOs to share their employees. Team members remained in their original locations with their existing reporting relationships, but for six months all of their other duties were stripped away. “The CIOs had to get their heads around that,” Reed says.

The team was given some quick training in automation and given three tasks: develop the application quickly, figure out how to automate the infrastructure, and figure out how to automate more of the application deployment and testing in order to marry DevOps with continuous application delivery.

There were no rules — or roles. “We threw them together and said, ‘You figure it out,’” Reed recalls. “We found some people knew a lot more than their roles indicated, and the lines began blurring between responsibilities.” Some folks were strong in certain areas and shared their expertise with others. Traditional infrastructure professionals had some middleware and coding understanding. “They didn’t have to be experts in everything, but they had a working knowledge,” Reed says.

The biggest challenge was learning to be comfortable with making mistakes. “GE has built a reputation around execution,” says Reed. “My boss [the global CIO of GE Capital] and I had to figure out how to foster an environment where people take risks even though it might not work out.”

Project Success
The project not only proceeded quickly — the application was delivered within several months — it also established some new IT processes. The team increased the amount of automation possible not only at the infrastructure level, but within the application layer as well. They also aimed for 60 to 70 percent reusability in developing the application, creating “lego-like” building blocks that can be recycled for future projects.

Business customers welcomed the new approach. In the past, “they would shoehorn as many requirements into the initial spec as possible because they didn’t know when they’d ever have the chance again,” says Reed. “Now it’s a more agile process.” The team launches a minimum viable solution and delivers new features over time.

For IT, “it was a radical change in thinking,” says Reed. “We’ve operated the same way literally for decades. There were moments of sheer terror.” And it wasn’t for everyone. Some opted out of the project and went back to their day jobs.

But Reed is eager to apply the process to future projects and rethink the way some legacy systems are built and managed. “We had talked about services-oriented architecture, and now we have something tangible that shows it can be done,” Reed says. “On the legacy side, we have to decide if we want to automate more of that infrastructure and keep application development the old way or invest in this.”

Some employees remained with the fleet management app team. Others started a new project. And a few went back to their original roles. “We’re trying to make disciples so more people can learn about this process,” Reed says.

Reed can envision the IT organization changing eventually. “What we look for in people when we hire them will change. There were years when we went out in search of very technical people. Then there were years of outsourcing where we sought people who could manage vendors and projects,” Reed says. “Now we need both, and we need to figure out how to keep them incentivized.”




7 Reasons Not to Use Open Source Software

Written by admin
February 13th, 2014

Talk to an open source evangelist and chances are he or she will tell you that software developed using the open source model is the only way to go.

The benefits of open source software are many, varied and, by now, well-known. It’s free to use. You can customize it as much as you want. Having many sets of eyes on the source code means security problems can be spotted quickly. Anyone can fix bugs; you’re not reliant on a vendor. You’re not locked in to proprietary standards. Finally, you’re not left with an orphaned product if the vendor goes out of business or simply decides that the product is no longer profitable.

However, the open source evangelist probably won’t tell you that, despite all these very real benefits, there are times when using closed-source, proprietary software actually makes far more business sense.

Here are some of the circumstances when old-fashioned proprietary products are a better business choice than open source software.

1. When It’s Easier for Unskilled Users
Linux has made a huge impact on the server market, but the same can’t be said for the desktop market – and for good reason. Despite making strides in the last several years, it’s still tricky for the uninitiated to use, and the user interfaces of the various distributions remain far inferior to those of Windows or Mac OS X.

While Linux very well may be technically superior to these proprietary operating systems, its weaknesses mean that most users will find it more difficult and less appealing to work with. That means lower productivity, which will likely cost far more than purchasing a proprietary operating system with which your staff is familiar.

2. When It’s the De Facto Standard
Most knowledge workers are familiar with, and use, Microsoft Word and Excel. Even though there are some excellent open source alternatives to Office, such as LibreOffice and Apache OpenOffice, they aren’t identical in functionality, user interface, performance, plugins or APIs for integration with third-party products. They’re probably close enough as much as 90 percent of the time, but on occasion there’s a risk that these differences will cause problems – especially when exchanging documents with suppliers or customers.

It also makes sense to use proprietary software in specialist fields where vendors are likely to have gone into universities and trained students on their software. “The software may not necessarily be better, but it may be selected by a university before an open source solution gets a big enough community around it,” says Chris Mattman, an Apache Software Foundation member and a senior computer scientist at the NASA Jet Propulsion Laboratory.

“When that happens, the students will then know the software better and be more productive with it,” Mattman says. When the students then move into a business environment, it makes sense for them to continue with the software they are used to.

3. When Proprietary Software Offers Better Support
Business-class support is sometimes available for open source software, either from the company leading the project or from a separate third party. Often, though, it isn’t – and that can be a problem, according to Tony Wasserman, professor of software management practice at Carnegie Mellon University.

“Some customers prefer to have someone outside the company to call for product support on a 24/7 basis and are willing to pay for a service-level agreement that provides a timely response,” he says. “People often respond very quickly to queries posted on the forum pages of widely used open source projects, but that’s not the same thing as a guaranteed vendor response to a toll-free telephone call.”

4. When You Want Software as a Service
Cloud software is slightly different from conventional software. As a general rule, you don’t get access to the source code, even if the hosted software is built entirely on open source components. That may not make the software proprietary, strictly speaking, but it doesn’t give you all the benefits of open source. Even so, the advantages of the pay-for-what-you-use software-as-a-service model may outweigh the disadvantage of not having access to the source code.

5. When Proprietary Software Works Better With Your Hardware
Many types of proprietary hardware require specialized drivers; these are often closed source and available only from the equipment manufacturer. Even when an open source driver exists, it may not be the best choice. “Open source developers may not be able to ‘see’ the hardware, so the proprietary driver may well work better,” Mattman says.

6. When Warranties and Liability Indemnity Matter
Some open source software companies, such as Red Hat, are structured to look like proprietary software vendors. They accordingly offer warranties and liability indemnity for their products, just like proprietary vendors do. “These companies are exactly the same as proprietary software companies, except that they won’t take you out to play golf,” Wasserman says.

For every Red Hat, though, there are many open source projects that aren’t backed by a commercial organization. While you may get warranties and liability from a third-party, in many cases you won’t. If that doesn’t suit you or your company’s software procurement policies, then you’re advised to find a proprietary vendor.

7. When You Need a Vendor That Will Stick Around
Yes, there’s no guarantee that a commercial software vendor will stick with a product if demand drops to such an extent that it’s no longer profitable to develop it. The company itself may even go out of business. But if an open source project is small, there’s also a danger that the person behind it may lose interest. If that happens, it may not be easy to find another open source developer to step in.

(This may be more of an argument against small open source projects than an argument for proprietary software – but at least you can look into the books of large software companies and make an informed decision as to whether they’re likely to be around in a few years to honor any commitments they give you.)

Don’t Be Too Dogmatic About Open Source Software

The lesson here: While open source software may often – and even usually – be a better choice than functionally similar proprietary offerings, it doesn’t make sense to be too dogmatic about it.

“As a practical matter, I think that many people would prefer to have everything open, especially in light of the recent revelations about the NSA spying on machines through USB chips,” Wasserman says. At the same time, though, many of those who prefer open source will make exceptions when there are no practical alternatives – not to mention their use of Mac and iOS devices.




IT inferno: The nine circles of IT hell

Written by admin
February 8th, 2014

The tech inferno is not buried deep within the earth — it’s just down the hall. Let’s take a tour

Spend enough time in the tech industry, and you’ll eventually find yourself in IT hell — one not unlike the underworld described by Dante in his “Divine Comedy.”

But here, in the data centers, conference rooms, and cubicles, the IT version of this inferno is no allegory. It is a very real test of every IT pro’s sanity and soul.


How many of us have been abandoned by our vendors to IT limbo, only to find ourselves falling victim to app dev anger when in-house developers are asked to pick up the slack? How often has stakeholder gluttony or lust for the latest and greatest left us burned on a key initiative? How many times must we be kneecapped by corporate greed, accused of heresy for arguing for (or against) things like open source? Certainly too many of us have been victimized by the denizens of fraud, vendor violence, and tech-pro treachery.

Thankfully, as in Dante’s poetic universe, there are ways to escape the nine circles of IT hell. But IT pro beware: You may have to face your own devils to do it.

Shall we descend?
1st circle of IT hell: Limbo
Description: A pitiful morass where nothing ever gets done and change is impossible
People you meet there: Users stranded by vendors, departments shackled by software lock-in, organizations held hostage by wayward developers

There are many ways to fall into IT Limbo: When problems arise and the vendors start pointing fingers at each other; when you’re locked into crappy software with no relief in sight; when your programmers leave you stranded with nothing to do but start over from scratch.

You know you’re in Limbo when “the software guys are saying the problem is in hardware and the hardware guys are saying the problem is in software,” says Dermot Williams, managing director of Threatscape, an IT security firm based in Dublin, Ireland. “Spend eternity in this circle and you will find that, yes, it is possible for nobody to be at fault and everyone to be at fault at the same time.”

A similar thing happens when apps vendors blame the OS, and OS vendors blame the apps guys, says Bill Roth, executive vice president at data management firm LogLogic. “Oracle says it’s Red Hat’s fault, while Red Hat blames Oracle,” he says. “It’s just bad IT support on both sides.”

Michael Kaiser-Nyman, CEO of Impact Dialing, maker of autodialing software, says he used to work for a nonprofit that was locked into a donor management platform from hell.

“The software took forever to run, it only worked on Internet Explorer, it crashed several times a day, and was horribly difficult to use,” he says. “The only thing worse than using it was knowing that, just before I joined the organization, they had signed a five-year licensing agreement for the software. I wanted to kill whoever had signed it.”

Organizations also find themselves in Limbo when their developers fail to adopt standard methodologies or document their procedures, says Steven A. Lowe, CEO of Innovator LLC, a consulting and custom software development firm.

“Every project is an ordeal because they’ve made it nearly impossible to learn from experience and grow more efficient,” he says. “They spend most of their time running around in circles, tripping over deadlines, yelling at each other, and cursing their tools.”

How to escape: “When you’re digging a hole in hell, the first thing to do is stop digging and climb your way out,” says Roth. That means making sure you have the tech expertise in house to solve your own problems, going with open source to avoid vendor lock-in, and taking the time to refactor your code so you can be more efficient the next time around.

2nd circle of IT hell: Tech lust
Description: A deep cavern filled with mountains of discarded gadgets, with Gollum-like creatures scrambling to reach the shiny new ones at the top
People you meet there: Just about everybody at some point

The circle of tech lust touches virtually every area of an organization. Developers abandon serviceable tools in favor of the latest and greatest without first taking the time to understand the new frameworks and methodologies (like node.js or Scrum), preventing anything from ever getting done. Managers want hot new gizmos (like the iPad) and invent reasons why they must have them, regardless of the impact on the IT organization. Executives become fixated on concepts they barely understand (like the cloud) and throw all of an organization’s resources behind them for fear of falling behind the competition.

“In reality, we all visit the circle of lust now and then,” says Lowe. “The problem with tech lust is the accumulation of things. You can get so mired in ‘we can’t finish this project because a new tool just came out and we’re starting all over with it’ that nothing ever gets done.”

How to escape: It is difficult to break free from the circle of tech lust, admits Lowe. “We all love shiny new things,” he says. “But you have to know what’s good enough to get the job done, and learn how to be happy with what you have.”

3rd circle of IT hell: Stakeholder gluttony
Description: A fetid quagmire filled with insatiable business users who demand more and more features, no matter the cost
People you meet there: Demons from sales and marketing, finance, and administration

This circle is painfully familiar to anyone who’s ever attempted to develop a business application, says Threatscape’s Dermot Williams.


Boot-to-desktop to become the default in Windows 8.1 update

According to leaked screenshots and unnamed sources, Microsoft will scrap ‘Metro’ and make boot-to-desktop the default in the Windows 8.1 update coming in March.

If you hated the Live Tiles presented by default on the Windows 8.x Start screen, Windows 8.1 let you tweak a setting to bypass the “Metro” interface at boot and go straight to the desktop. Now boot-to-desktop will be the default, according to leaks from Microsoft insiders and screenshots of the upcoming Windows 8.1 update. Rumor has it that the update will roll out on Patch Tuesday in March.

The Russian site Wzor first posted leaked Windows 8.1 test build screenshots showing the change enabled by default.

Leaked Windows 8.1 test build, no more Metro Start screen, boot to desktop as default
Then Microsoft insiders, or “sources familiar with Microsoft’s plans,” told The Verge that Microsoft hopes to appease desktop users by bypassing the Start screen by default, meaning users will automatically boot straight to desktop. “Additional changes include shutdown and search buttons on the Start Screen, the ability to pin Windows 8-style (“Metro”) apps on the desktop task bar, and a new bar at the top of Metro apps to allow users to minimize, close, and snap apps.”

Of course, Microsoft continues to lose millions upon millions of customers to iOS and Android. That desperation is likely what drove Microsoft to force a touch-centric operating system on customers. If customers can’t easily use a Windows OS on a traditional desktop, then Microsoft hoped its “make-them-eat-Metro” strategy would force people to buy its tablet to deal with the touch-based OS. For Microsoft, it was like killing two birds with one stone. But despite the company’s “One Microsoft” vision, we’re not birds and we don’t like having stones thrown our way.

Microsoft claimed that telemetry data justified the removal of the Start button in Windows 8, and then its return in Windows 8.1. That same telemetry data shows “the majority of Windows 8 users still use a keyboard and mouse and desktop applications.” The Verge added, “Microsoft may have wanted to push touch computing to the masses in Windows 8, but the reality is that users have voiced clear concerns over the interface on desktop PCs.”

“Microsoft really dug a big hole for themselves,” Gartner’s David Smith told Gregg Keizer, referring to the Redmond giant’s approach with Windows 8. “They have to dig themselves out of that hole, including making some fundamental changes to Windows 8. They need to accelerate that and come up with another path [for Windows].”

Back in December, NetMarketShare stats showed that more people were still using the hated Windows Vista than Windows 8.1. January 2014 stats showed Windows 8.1 on 3.95% of desktops, with Vista on 3.3%. Despite Microsoft warning about the evils of clinging to XP, and the April death of XP support, Windows XP was still on 29.23%. Many people still hate Windows 8, which may be why the company plans to jump to the next OS as soon as possible.

Microsoft plans to start building hype for “Windows 9” at the BUILD developers’ conference in April. The new OS is supposedly set to come out in the second quarter of 2015. While it seems wise for the company to want to ditch the hated Windows 8.x as soon as possible, Microsoft had better do something to encourage developers, as the expected boot-to-desktop change will mean folks won’t see the Metro apps on the Start screen.

Windows 8.1 update leaked screenshot of test build
According to the test build screenshot, Microsoft is urging people to “switch to a Microsoft account on this PC. Many apps and services (like the one shown for calendar) rely on a Microsoft account to sync content and settings across devices.” Note that “sign into each app separately instead” is “not recommended” by Microsoft. Of course, setting up a Windows 8 computer without tying it to a Microsoft email account was “not recommended” either…but it can be done with just about any email address, or set up as a local account tied to no email address. If you use SkyDrive, aka the newly dubbed “OneDrive,” then why not just log in when you need it?

Trying to keep its developers “happy” may be part of the reason Microsoft does not recommend signing into your Microsoft account on an individual app basis. Sure, there’s still the Windows Phone Store, but some people complain that it is full of junk and fake apps. Of course, since Windows 8’s dueling tablet-PC interface was a flop, perhaps Microsoft will follow Apple’s lead and come up with a separate OS for tablets. That move might help out both Microsoft and developers; without developers, there are no apps. And without good apps, even a new OS for tablets won’t keep Microsoft from continuing its decline into the abyss of irrelevancy.
