
Investigating Internet Crimes

Written by experts on the frontlines, Investigating Internet Crimes provides seasoned and new investigators with the background and tools they need to investigate crime occurring in the online world. This invaluable guide provides step-by-step instructions for investigating Internet crimes, including locating, interpreting, understanding, collecting, and documenting online electronic evidence to benefit investigations.

This year I served as technical editor for this excellent book by Todd Shipley and Art Bowker. Cybercrime is the fastest growing area of crime as more criminals seek to exploit the speed, convenience and anonymity that the Internet provides to commit a diverse range of criminal activities. Today’s online crime includes attacks against computer data and systems, identity theft, distribution of child pornography, penetration of online financial services, using social networks to commit crimes, and the deployment of viruses, botnets, and email scams such as phishing. Symantec’s 2012 Norton Cybercrime Report stated that the world spent an estimated $110 billion to combat cybercrime, an average of nearly $200 per victim.

Law enforcement agencies and corporate security officers around the world with the responsibility for enforcing, investigating and prosecuting cybercrime are overwhelmed, not only by the sheer number of crimes being committed but by a lack of adequate training material. This book provides that fundamental knowledge, including how to properly collect and document online evidence, trace IP addresses, and work undercover.

  • Provides step-by-step instructions on how to investigate crimes online
  • Covers how new software tools can assist in online investigations
  • Discusses how to track down, interpret, and understand online electronic evidence to benefit investigations
  • Details guidelines for collecting and documenting online evidence that can be presented in court

Blackhatonomics: An Inside Look at the Economics of Cybercrime

Blackhatonomics: An Inside Look at the Economics of Cybercrime explains the basic economic truths of the underworld of hacking, and why people around the world devote tremendous resources to developing and implementing malware.

The book provides an economic view of the evolving business of cybercrime, showing the methods and motivations behind organized cybercrime attacks, and the changing tendencies towards cyber-warfare.

Written by an exceptional author team of Will Gragido, Daniel J. Molina, John Pirc and Nick Selby, Blackhatonomics takes practical academic principles and backs them up with use cases and extensive interviews, placing you right into the mindset of the cyber criminal.

The Russian Software Pirates

Every day here and in dozens of other Russian cities, pirate dealers sell copies of the world’s most popular software titles at $5 per CD-ROM.

Despite fears about the economy, small and medium-sized businesses are flourishing in this elegant northwestern Russian city – and pirated software is installed on almost all of their computers.

Nearly all high-end computer games, Encyclopaedia Britannicas and other educational and reference CDs are distributed through illegal sources.

Bootlegged software use is certainly not limited to Russia. Industry analysts say that 27 percent of the software running on American computers is pirated.

And the Business Software Alliance, which monitors business software piracy, says 43 percent of PC business applications installed in Western Europe are illegal copies.

In Russia, however, the piracy rates are a stunning 91 percent for business applications and 93 percent for entertainment software, according to Eric Schwartz, counsel to the International Intellectual Property Alliance, a Washington, D.C.-based organization that lobbies internationally on behalf of the copyright industry.

Schwartz said that piracy in Russia costs American entertainment software manufacturers $223 million a year and business software makers almost $300 million. The Business Software Alliance estimates worldwide revenue losses to the software industry from piracy at $11.4 billion.

Under the 1992 agreement with the United States that guaranteed Most Favored Nation trading status, Russia is required to effectively enforce anti-piracy laws, but actual enforcement is virtually nonexistent.

Meeting the Dealers
The dealers, who operate in stalls and kiosks around major transportation hubs or in full-scale markets usually 15 minutes from the city center, offer an enormous range of titles, usually bundled in a form their manufacturers would never dream of.

“That’s Windows 98, Front Page 98, Outlook 98, MS Office 97 SR1 and, uh, yeah, Adobe 5.0,” said Pyotr R., a student at St. Petersburg Technical University, of a single CD-ROM. “On the disk there are files, like ‘crack’ or ‘serial’ or something, and that’s where you’ll find the CD keys,” he said, referring to the codes that unlock CD-ROMs and allow users to install the programs.

Pyotr (who spoke, as did all others interviewed for this article, on condition of anonymity) sold that disk, plus a second one containing Lotus Organizer 97, several anti-virus programs and some DOS utilities, for 60 rubles or about $10.

Another dealer was offering Windows NT 4.0 for $5, and BackOffice for $10. According to Microsoft, the recommended retail prices for these products are $1,609 and $5,599.

Many Russians, who during the days of the Soviet Union bought most necessities through black market sources, think nothing of buying their software this way. They even defend the markets as providing a commodity that had been long-denied them.

After the collapse of the Soviet Union, inexpensive computers began to flood into the country from Taiwan, Germany and the United States, increasing the importance of these illegal software markets. Spending at least $800 on a computer was an enormous investment for Russians, even relatively well-paid St. Petersburgers, who earn an average salary of around $350 a month. Those who did buy one were in no position to consider purchasing software legitimately, even if it were readily available, which it often wasn’t.

These days, though, legitimate outlets for hardware and software are popping up everywhere in Russia; computer magazines offer licensed versions of everything available in the United States and Western Europe, and software makers advertise in the city’s well-established English-language media.

The markets continue to thrive with an alarming degree of perceived legitimacy. Outside the Sennaya Square metro station in St. Petersburg, a police officer approached a pirate dealer (who offered, among other things, Adobe Font Folio and QuarkXPress) and angrily chastised him for not prominently displaying his license to operate the stall. When the dealer complied, the policeman moved on.

Customers feel secure that the pirated copies will work and that belief appears well-founded. Bootlegged titles come with a written guarantee – good for 15 days from the date of purchase – that they’re virus-free and fully functional.

And files on the CDs themselves boast of high-quality, code-cracking techniques: “When so many groups bring you non-working fakes, X-FORCE always gets you the Best of the Best. ACCEPT NO IMITATION!” boasts one.

“There’s a lot of viruses around in Russia,” said Dima V., a system administrator who runs several small company networks in St. Petersburg using bootlegged copies of Windows NT 4.0, “but most of the disks you buy in the markets are clean. The guys are there every day and if they give you a virus you’ll come back – it’s just easier to sell you the real thing.”

Foreigners get in on the action
Russians are not by any means the only people installing the pirated programs. While employees of multinational companies or representatives of American companies would never dream of risking their jobs by violating copyright laws, self-employed Westerners, or those who have established small Russian companies, have no qualms about doing so.

They also pose a question software manufacturers find difficult to answer: Who would buy a network operating system package for $5,000 when it’s available for $5?

“Nobody,” said Todd M., an American business owner in St. Petersburg, whose 24-PC network runs a host of Microsoft applications that were all bootlegged.

“There’s just no financial incentive for me to pay the kind of prices that legitimate software costs,” he said. “I mean, it would be nice to get customer service right from the source, but we have really excellent computer technicians and programmers in Russia and they can fix all the little problems that we have.”

Customer support and upgrades are just what the manufacturers point to as advantages of licensed software, even in markets like Russia.

“There are enormous incentives,” said Microsoft’s Mark Thomas, “to buying legitimate software, and they start with excellent customer support and service and upgrades. We spend $3 billion a year on research and development and the money that we make goes right back into making products better and better products. The pirates don’t make any investment in the industry.”

And local industry, Thomas pointed out, suffers disproportionately in the face of piracy.

“A huge amount of our resources are put into making sure local industry builds on our platform,” he said. “When a local company creates packages for, say, accounting firms, and somebody can come along and buy it for $5, these local companies can lose their shirts.”

Piracy getting worse
Despite heavy lobbying by industry representatives and government agencies, piracy has worsened. As CD copying technology becomes cheaper, large factories in Russia and other countries, including Bulgaria, churn out copies of software copied by increasingly sophisticated groups in countries around the world, especially in Asia.

Encyclopaedia Britannica wrote off Malaysia as a market effectively destroyed by pirates, who sold 98 out of every 100 copies of its flagship Encyclopaedia three-CD set for a fraction of its recommended retail price of $125. The same disks, which have not officially even been offered for sale in Russia, are readily available in the St. Petersburg markets for $10.

“For Encyclopaedia Britannica, the cost of piracy is millions a year,” said James Strachan, EB’s international product manager. “One hundred percent of the value of our product is an investment in the authority and depth of our content,” he said. “Piracy causes us extreme concern and we do everything we can to root it out and prosecute.”

Todd M., the businessman with the 24-PC network, offers little hope that the situation will soon change in favor of manufacturers.

“With all the problems I have running my business here in Russia, from armed tax police to Byzantine procedures and customs duties, software piracy just doesn’t register with me,” he said.

“It’s the one thing about doing business here that’s somebody else’s problem.”

Setting up Squid. Then Using It.

So I’m here in California, staying at a hotel for ten days and want to look at some websites. Nothing too fancy, some blog entries, research for upcoming reports, maybe some racy stuff like No-Load Mutual Funds. And I am, of course, like you, on a free, open, wireless connection. My mail is tunneled, but web isn’t. So I realize I should set up a proxy server somewhere else, like at home, and tunnel into it, lest anyone sniffing on the local WiFi LAN (not that I’d ever do something like that with something like Wireshark) or indeed the hotel’s wireless contractor have a record of everything I look at and type.

No brainer: Squid’s your man. But in looking around for a plain-English how-to guide on setting up and then using Squid, I couldn’t find one. So here it is. The first thing to do, then, is install and configure Squid.

Setting Up Squid
Because I am on Gentoo, this was easy:

emerge --sync
emerge squid

I bet on Debian it’s as simple as

sudo apt-get update
sudo apt-get install squid

Once Squid was installed, I saved the original /etc/squid/squid.conf file as a backup and then made this my new one:


http_port 3128                              # port Squid listens on
cache_mem 50 MB                             # RAM set aside for hot objects
visible_hostname DOSA                       # hostname shown in error pages
cache_dir ufs /var/cache/squid 500 16 256   # 500 MB disk cache, 16 x 256 subdirectories
offline_mode off
maximum_object_size 102400 KB               # cache nothing bigger than 100 MB
reload_into_ims off
pipeline_prefetch on
# Define who's who...
acl my_network src 192.168.0.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
# ...then allow the LAN and localhost, and deny everyone else (first match wins)
http_access allow my_network
http_access allow localhost
http_access deny all

Then save the file and restart Squid (or start it, if it isn’t already running):

/etc/init.d/squid start

That, you’ll see, allows traffic from both the local network and localhost, but from no one else. We’re accessing via SSH, so once we’re tunneled in, we are on localhost.

I only allow access to the box via one port for SSH, and that is not a standard port. This little security-by-obscurity kludge was not for defense against hackers, but only to stop the bots from constantly knocking on my door and filling up my auth.log with automated login attempts. Those attempts were useless anyway because of the actual (non-obscurity-related) security measure: I don’t allow remote login with a keyboard password – you need a pre-saved key. Also, of course, the firewall does not accept connections to the Squid port – to get at that port, you need to SSH in and do port forwarding. (If anyone can tell me a better, safer way to do this I’d be obliged.)
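For reference, that arrangement (key-only logins on a non-standard port) comes down to a few lines of sshd_config. This is a minimal sketch, not my exact file, and 7890 is just a stand-in port number:

```
# /etc/ssh/sshd_config (fragment)
# Non-standard port: cuts log noise from bots, adds no real security
Port 7890
# The actual security measure: keys only, no keyboard passwords
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

Restart sshd after editing, and keep a working session open while you test a new one, lest you lock yourself out.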

There’s some other authentication stuff one could do quite easily (forcing a user prompt in the browser when users start a new session to authenticate to Squid via pam) that I feel comfortable ignoring in this case because I’m fairly confident about the physical access to the network (if that’s breached I have bigger problems) and also because web access to the box is limited as I’ve just described.

Setting Up The Tunnel
Now the trick is to get my machine here in the hotel to talk to Squid across an encrypted SSH tunnel lest I send my blog password and evidence of my looking at IBEX 35 stocks to everyone in my 17-floor hotel. This machine is an Ubuntu box, so I set up a simple SSH tunnel with port forwarding – using the technique first rattled off to me by Ian Sacklow, head of the Capital District Linux Users Group, while we were standing in a Barnes & Noble store about three years ago:

sudo ssh -L 3128:127.0.0.1:3128 user@your-server.com -p 7890 -f -N

The -L means bind the local port (given first) through the SSH connection to the host and port given after it, separated by colons. Put in a mnemonic way,

SSH BIND MY_PORT_HERE:host:THEIR_PORT_THERE.

By that standard, I’m binding port 3128 of my local machine to port 3128 of the remote machine. One subtlety: the 127.0.0.1 in the middle is resolved by the SSH server, not by your laptop, so it means the remote machine’s own loopback interface – exactly where Squid is listening. Then I specify the remote machine with user@your-server.com and, if required, its SSH port with -p.

The -f sends the SSH shell to the background – but brings it back if the SSH server prompts for a password or sends something else back. The -N (in SSH2 only) says, “And while you’re in the background, don’t execute any remote commands,” or in this case, “Just set up the tunnel and make yerself scarce.”

If you’ve timed out a sudo session or if you have just opened the terminal, you’ll be first prompted for your user password to carry out the sudo part of the command. Once that’s done, if your SSH server allows you to use keyboard interactive logins and you don’t have a remote key, then you’ll be prompted for the password of the user name on the remote server. Enter it and if accepted, you should just return to a local user prompt. Same if you have an SSH key – after running the tunnel command, you’ll just be returned to a local user prompt.
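If you bring this tunnel up often, a tiny wrapper saves retyping it. A minimal sketch, using the stand-in host and ports from this article; it only prints the command, so you can sanity-check the port arithmetic before running anything:

```shell
#!/bin/sh
# tunnel_cmd LOCAL_PORT REMOTE_PORT USER@HOST SSH_PORT
# Prints the ssh invocation that forwards LOCAL_PORT on this machine
# to REMOTE_PORT on the server's own loopback (where Squid listens).
tunnel_cmd() {
    echo "ssh -L $1:127.0.0.1:$2 $3 -p $4 -f -N"
}

# The command from this article; run the printed line under sudo.
tunnel_cmd 3128 3128 user@your-server.com 7890
```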

Tip: If you set up the tunnel and you get a message saying that the local port is already in use, find out what’s using it: in this case, you’d run:

sudo lsof -i tcp:3128

That should get you info about what’s running. Kill it and then start the tunnel again. Unless you decide that you don’t want to kill it, in which case, you’d change the local bind to a different port. It doesn’t matter a whit to either SSH or Squid.

Setting up Firefox
Now things are easy. You’ve got Squid running on the remote server. You’ve got an SSH tunnel connecting you to it. Now just tell Firefox where to look. In Firefox select Edit -> Preferences -> Network -> Connection Settings. Tick the radio button marked ‘Manual Proxy Configuration’, type ‘127.0.0.1’ in the HTTP Proxy box and ‘3128’ (or whatever) into the Port box, click OK, then Close. Now type http://www.google.com into your URL bar and see what happens. With luck, you’ll get taken to Google.

To make sure you’re actually using the proxy, SSH into the Squid server box, and look at the tail of /var/log/squid/store.log. You should see something about google. To watch it change in near real time, do:

tail -f /var/log/squid/store.log

And surf to another location.

Summary
Web surfing in hotels or coffee shops is nasty stuff. This is one way to add a modicum of privacy to your activities at no cost but your time. Of course, corporate firewall requirements might require some modification to the tunnel commands to get out to your server, but shouldn’t present too much trouble. But beware – an entire industry exists which seeks to discover people engaged in just that kind of activity, which is certainly against your corporate access policy.

Happy Days In The Bird Business

With political intrigue, high finance and customers in exotic locales, it’s not a boring time to be in the “bird” business.

Despite a global communications sector slowdown and tepid business climate, revenues of satellite services companies grew 7 percent to $49.8 billion in 2002, according to the Satellite Industry Association.

But it’s not the best of times, either – that 7 percent growth pales in comparison to the 17 percent jump in 2001.

Operators, though, are investing heavily in new technologies and standardization despite a string of bankruptcies over the past decade that have made financing more difficult.

The major players – Inmarsat, PanAmSat, New Skies Satellites, Loral Skynet and Eutelsat – operate constellations of satellites, providing a broad range of services and technologies.

As new satellite-based network technology emerges and the industry enters a transitional phase, these companies must foster new growth while maintaining their core customer base.

The current generation of satellites is essentially a fleet of “bent pipes”: each takes a signal from a specific location on the ground and broadcasts it back towards earth over a significantly larger area.

But the next generation of birds promises very high-speed Internet connections and on-board, high-speed digital signal processing; satellites will effectively become Internet routers-in-the-sky.

While pay television has been a runaway consumer success story for the satellite industry, efforts to sell satellite phones before their time soured the concept of satellite phone and Internet services in the minds of consumers (and the investment community).

Yet behind this image is a steady and sizable business.

The core customers are the military, governments and corporations seeking secure, remote access to data; broadcast television, and telephony. Satellite services are particularly good at providing communications between central offices and remote locations in industries like natural-resource exploration, finance, manufacturing and transportation.

By far the largest commercial customers are military; Gartner DataQuest estimates that the U.S. Department of Defense alone spends $300 million annually on satellite services. Satellite communications, says Patti Reali, an analyst at Gartner, are a significant part of U.S. military tactics. In turn, the military is driving the requirements for the next generation of satellites, she said.

Satellite companies are hoping their new capabilities will appeal not just to governments but to businesses small and large seeking the ability to send and receive data from anywhere.

Inmarsat, the world’s first global satellite communications operator, is the best-known name in the small circle of satellite consortia that operate in this business. Established by the United Nations in 1979, the International Maritime Satellite Organization was privatized in 1999 and has 86 government and corporate shareholders. After retreating from plans for an initial stock sale because of unfavorable market conditions, Inmarsat is now seeking a private sale.

It is the object of a bidding war between a pair of British private equity firms, Apax Partners and Permira, and a U.S.-based consortium comprising Soros Private Equity and Apollo Management.

The bidders are attracted by Inmarsat’s maritime customer base and military contracts. Inmarsat now also serves mobile business Internet users with its Regional Broadband Global Area Network product, known as R-BGAN, a portable satellite modem providing relatively high-speed Internet access from a portable unit that resembles a notebook-computer.

“BGAN is not for everybody,” said Paul Griffith, vice president for portfolio development at Inmarsat. “It’s not consumer-oriented, but it will be very attractive to businesses” beyond the traditional oil and gas, humanitarian aid groups and media markets.

If satellite companies hope to appeal to more mainstream groups, they will have to make their offerings simpler, cheaper and more universal.

A newly adopted standard, digital video broadcasting return channel via satellite, or DVB-RCS, may foster competition and help reduce satellite broadband Internet charges, allowing satellite companies to compete with terrestrial options such as digital subscriber line, or DSL; cable; and fiber optics.

Satellite companies are also appealing to developing countries, which find that building a traditional, earthbound communications network is sometimes more expensive than a wireless solution.

Even the developed world has areas in which satellite technologies can complement fiber and cable networks. Satellite can more cheaply bring broadband to remote regions in countries like Spain, France and even Germany and the Netherlands to help those countries comply with an EU initiative called eEurope 2005, which mandates that all public schools, administrations and hospitals have broadband capabilities by 2005.

Similar government initiatives in Canada and Mexico have caught the eye of satellite services companies.

“We also continue to see strong demand in the Middle East and Asia,” said Diederik Kelder, senior director of business planning for New Skies, “and there is a lot of activity in Western Africa as well.” New Skies, which provides infrastructure to Internet service providers operating in remote areas, has seen heavy interest in data services in and around Iraq as that country’s reconstruction ramps up.

Late this year, the Spaceway unit of Hughes Network Systems will employ high-performance Ka-band satellites, which combine sophisticated onboard digital processing with advanced transmission capabilities. Hughes hopes to upgrade customers like banks, credit card companies and other businesses, which need terminals to communicate with a central location, to this next-generation of satellite. They will simultaneously try to expand into the small-and medium-size business market with services such as fast Internet access.

As for satellite companies offering broadband services to consumers, analysts are skeptical that a case can be made. The few initiatives which have popped up – notably including StarBand and DirecPC in the United States – have been commercial failures. Lars Godell, a senior analyst at Forrester Research, places satellite services firmly in the “other” category when discussing the European broadband market.

“Consumer selection of broadband comes down to price, price and price,” he said, “most satellite solutions require either a large subsidy from the service provider or, less likely, from the consumer – just remember how cheap DSL and cable modems are these days.”

WiFi encryption standards

There are three commonly-used standards of Wi-Fi AP security in the world today. The best known, Wired Equivalent Privacy (WEP), is readily vulnerable to exploits and should be trusted to provide only the flimsiest of protection. WEP is widely considered to be a trivial barrier to even barely competent hackers, affording only a bare minimum of protection on its own.

Wi-Fi Protected Access (WPA) was developed as an intermediate solution to the revelation that WEP’s encryption had been highly compromised; it introduced the Temporal Key Integrity Protocol (TKIP), which dynamically rotates the encryption keys. The second generation of WPA security is called WPA2, and this is the current state of the art. WPA2 replaces TKIP with stronger, AES-based encryption and delivers (to date) very good protection against eavesdropping. WPA2 Personal secures the network with a shared passphrase; WPA2 Enterprise uses an authentication server to authenticate individual users.

Until recently, implementing WPA and WPA2 was something of a hassle; if you’ve been wireless for some time now, and still have Wireless B Cards (see sidebar), you’ll have challenges using WPA. If you have fairly new equipment, such as an Intel Centrino notebook, you’ll be able to use at least WPA if not WPA2.
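For what it’s worth, on a Linux client WPA2 Personal comes down to a few lines of wpa_supplicant configuration. A minimal sketch; the network name and passphrase here are invented examples:

```
# /etc/wpa_supplicant/wpa_supplicant.conf (fragment)
network={
    # hypothetical network name and passphrase
    ssid="ExampleFirm"
    psk="a long passphrase of 20 or more characters"
    # RSN is the IEEE name for WPA2; WPA-PSK key management is "Personal"
    proto=RSN
    key_mgmt=WPA-PSK
}
```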



Also in this series…
A proposal for Reasonable Wireless Security for law firms

A sample network access policy

Wifi encryption standards

“There’s nothing on my desk worth stealing”

…and free hotspots for all



ASPs Heat To Red Hot

In the midst of early April’s [2000] major tech sell-off, a company called Update.com Software AG had an oversubscribed and successful IPO on the Neuer Markt, and analysts say that a major reason for the successful launch was that Update is an early mover as an Application Service Provider, or ASP.

To many analysts, ASPs are simply revolutionary. “As a general trend,” says Charles Homs, Senior Analyst at Forrester Research, “ASPs will substantially change the overall facility delivery in the e-business market.”

The European ASP sector has gotten off to a slower start than in the USA, but it is definitely heating up, and a sector worth watching. And Europe-based ASP companies have a decided home-court advantage over American imports: industry movers like the Finnish Sonera, German Infomatec and England’s Netstore, and even giants such as SAP, understand that American solutions don’t work well out-of-the-box here.

ASPs offer businesses of all sizes offsite tools to store, retrieve and use information. When broadband hits Europe – in six to nine months for Scandinavia, and a year to a year and a half in most other countries – the spread of ASP is expected to catch on like wildfire.

Netstore, with 750,000 customers and a market cap of €895 million, and Infomatec (IFO NM) offer customers centrally stored applications, such as spreadsheets and inventory control systems, as well as providing storage for data – effectively allowing companies to outsource all their IT needs.

Sonera (SOY GR) focuses mainly on web-based transaction and delivery products, such as allowing sites to offer streaming media and other functions.

The road to the European ASP market is fraught with problems that locals are better placed to identify. For example, said Homs, a company providing Customer Relationship Management software to a company in Germany is required by German law to physically maintain the server within German borders, to comply with data security laws.

But IBM, too, has been an early mover in the European ASP space, and has broad experience with implementation in Europe. For the past two years it has been actively developing ASP products.

What’s An ASP?
ASPs are large, ultra-reliable, high-capacity and high-speed servers that store not just a company’s databases and information, but also the applications that manipulate the data. Whereas companies now invest in traditional “fat clients” – the typical computer/operating system combination wherein applications are run and data stored – many companies in the US and Europe already employ a new system.

Using a fast wired or wireless internet connection and a “thin client” – a desktop, notebook or even palm-top device running just a simple operating system and a web browser – users can now download the shell of, say, a package tracking or inventory system, call up data they need, modify it and store the results, using only software stored on the ASP’s server.

For small to medium size enterprises, ASPs could be most valuable, allowing them to maintain one copy, not thousands, of a program, and administer it centrally.

“This is a really interesting sector, because these ASPs can really help small to mid-sized firms save lots of money,” said Peter Klostermeyer, analyst at VMR. “Today’s software applications are not as expensive as they used to be, but the beauty of ASPs is that they bring down the costs of implementation and administration of systems.”

A not-so-subtle differentiation in the ASP sector is between hosting and true ASP, and the most important question is who actually implements the solutions. Some ASPs simply provide the platform and let clients load whatever they choose, while others set up all the infrastructure and run everything on the server, often limiting flexibility to what can be customized.

But to small- and medium-sized enterprises, this second option brings a new world of computing power for far less than doing it yourself.

“These are the companies that stand to gain the most in the short term,” said Homs, “because companies are finding it increasingly difficult to find the people to set up their systems and web systems – and it’s also very expensive. But this allows them to share the costs with other companies. I think this type of ASP is a very lucrative solution.”

There’s Money In The Middle

In 1997, when WAP was unveiled to the world, the proposed information flow chain neatly stated that content would be provided in wireless markup language (WML), converted to binary WML, sloshed through a WAP Gateway, blown out on cellular networks like GSM, and finally sucked into and displayed on mobile telephone handsets.

Customers who were even able to get the first WAP phones (many models were late in rollout) complained bitterly of slow speeds, caused not just by the service but also by the devices themselves. The over-hyping of WAP, especially in Q1 2000 and Q2 2000, and subsequent disappointing offerings nearly put the nail in WAP’s coffin, from a marketing standpoint.

More significant than the slowness, however, is the fact that with the wireless Internet there are heaps of different devices to format for, and WAP-oriented content providers have the not insignificant task of managing two content formats, one in HTML and one in WML.

Problems aside, WAP probably isn’t going anywhere, at least for the next few years, simply because of device penetration: millions of WAP handsets are already in the hands of users, and new GPRS (general packet radio system) or 3G-enabled terminals will need time to run their product lifecycle from early-adopter, highfalutin business people, through to the kids in the discos to, well, my mother.

New solutions
So as mobile data delivery moves from phone handsets to “terminals”, competing browser protocols and devices will come and go in the coming years. Getting content to all the different devices is still the challenge, and there are lots of ways to do it.

Take a straight “delivery system” such as AvantGo, which is purely infrastructure: companies use it to extend their content or applications to a mobile device, by compressing image size and format and optimizing layout for the device requesting the information. It also manages offline versus online content, letting devices with always-on connections browse at will but caching entire sites locally for people with dial-up connections.

That’s a straight compression solution, and many in the industry say that “transcoding” (conversion) of one form or another will be the way to go in the future. Because legacy content isn’t just HTML (it’s often in the form of Word, QuarkXPress, flat files and PDFs), software that transcodes from old formats to new ones is hot these days, with dozens of startups saying they can do it better than anyone else. Those companies will undoubtedly get shaken out, and some clear winners will emerge in the next year or so. More interesting than the companies themselves, however, are the coding method and the process used.

As we have seen, the darling of the “do-it-all-code” pack has been XML (extensible markup language). While HTML, the markup language of the Internet, allows control over the appearance of content – the <b> tag, say, commands a bold typeface – XML allows markup that describes the content itself, tagging “Le Grove”, for example, as a restaurant name rather than merely as bold text.

The beauty of XML, and XSLT, the stylesheet language that controls how XML can be presented on a page, is that they create a single source of uniformly formatted data from existing content, which can in turn be squeezed out into whatever flavor you want – HTML, WML, nML and so on.
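To make the single-source idea concrete, here is a minimal sketch (not from the article) of one XML record being rendered into two output flavors. It uses only Python’s standard library; since the stdlib has no XSLT engine, the stylesheet step is simulated with one small render function per target format, and the `<restaurant>` record and all element names are illustrative assumptions.

```python
# Minimal sketch of "single XML source, many output formats".
# In a real pipeline an XSLT stylesheet would drive each rendering;
# here each target format gets a small render function instead.
import xml.etree.ElementTree as ET

SOURCE = """
<listing>
  <restaurant>
    <name>Le Grove</name>
    <city>London</city>
  </restaurant>
</listing>
"""

def to_html(root):
    # HTML controls appearance: bold the name, plain text for the city.
    r = root.find("restaurant")
    return "<p><b>{}</b>, {}</p>".format(r.findtext("name"), r.findtext("city"))

def to_wml(root):
    # WML wraps the same content for a WAP handset's card/deck model.
    r = root.find("restaurant")
    return "<card><p>{} - {}</p></card>".format(r.findtext("name"), r.findtext("city"))

root = ET.fromstring(SOURCE)
print(to_html(root))  # -> <p><b>Le Grove</b>, London</p>
print(to_wml(root))   # -> <card><p>Le Grove - London</p></card>
```

The point is structural: neither render function touches the source data, so adding a third format means adding one more function (or stylesheet), not re-authoring the content.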

A new data chain

So the new chain of data goes from legacy content to content conversion; to the generic, XML-ized content; to a content gateway, which takes the XML and converts it to device- and code-specific content based on the type of device requesting the data; to the protocol gateway, which negotiates multitudinous device protocols such as WAP and iMode; to the network; and finally to the wireless devices.
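The content-gateway step of that chain – picking an output flavor per requesting device – can be sketched as a simple dispatch table. This is a hypothetical illustration, not any vendor’s API; the device names and format table are assumptions.

```python
# Hypothetical content-gateway dispatch: map the requesting device
# type to the markup flavor the gateway should emit.
RENDERERS = {
    "wap-phone": "WML",        # WAP handsets get binary-compiled WML
    "imode-phone": "cHTML",    # iMode terminals use compact HTML
    "desktop-browser": "HTML",
}

def choose_format(device_type: str) -> str:
    """Return the markup flavor to emit for a given device type."""
    # Unknown terminals fall back to plain HTML rather than failing.
    return RENDERERS.get(device_type, "HTML")

print(choose_format("wap-phone"))  # -> WML
print(choose_format("pda"))        # -> HTML (fallback)
```

In practice the gateway would key off something like the HTTP User-Agent header rather than a clean device-type string, but the shape of the decision is the same.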

You could see how this type of thing would be of compelling interest to Roger Barnes, a consultant for the Rough Guides series of travel guidebooks, which sits on a heap of content in QuarkXPress.

Barnes was approached by AuthorOnce, a company that claimed that they could “actually do it now: take our content, put it through a GUI, and put it out to any platform we wanted,” says Barnes. As we went to press, Barnes had seen and been impressed with a small demo, the success of which had led him to schedule a meeting in New York with the AuthorOnce team and Rough Guides’ senior management.

AuthorOnce is one of several companies offering what may be looked upon as complete middleware solutions – from one end of the chain to the other, and then back again. The company, which has received friends-and-family backing to the tune of $750,000 and is currently fishing for a first round of funding, claims that what sets it apart from companies like AvantGo and Everypath is its method of getting data from the legacy system into XML in the first place.

“We’ve got travel books, but we’ve also got guides to music,” says Barnes. “Converting text to XML is one thing, but we’ve got pictures, maps, headlines. The company’s ‘rule engine’ system learns about the way we publish our books every time we work on one. So in preparing the new Rough Guide to New York, it knows what we did last time.”

That’s a different added value from offerings from other companies, like AvantGo and Everypath, that simply take content, pull it up into XML, and send it out to a Web or WAP interface. Those companies say that their products are perhaps the most effective way of getting legacy information out to a world of different device formats.

AuthorOnce might disagree, saying that the hardest part of the chain isn’t delivery to the devices, it’s XML-ing it in the first place, and doing it in a way that allows you to control the flow of data and create rules for future conversions of like-formatted but different texts.

Taking one end of the chain
“Well, if you’re in the business of going from n to XML, of course you want to view this as the problem,” said Rikard Kjellberg, CEO of Ellipsus Systems, a company in Stockholm that provides the Protocol and Content Gateways. “There are lots of excellent tools that offer the mechanics of going from the database to XML – I’d bet even Oracle would have tools for that.”

Kjellberg’s Ellipsus concentrates on what happens after the content is in XML, and on how best to transmit the data to the jungle of devices out there. Its Sargasso Mobile Internet Server provides an open software platform that lets legacy content connect, through any IP bearer (CSD, GPRS, etc.), to client devices. It consists of a pull and a push proxy gateway, a directory interface, a manager interface, a security pack and a “gatekeeper” firewall, allowing access control for the Web as well as for RMI, CORBA, SOAP and other objects.

That is the unique selling point; Ellipsus allows developers to introduce CORBA (and, for example, Enterprise Java Beans) all the way to the device, letting them make a more dynamic interface to legacy systems than would be available with traditional HTTP.

What it’s doing is creating a virtual thin client within the Ellipsus system, which end users access via nML from their phones. The phone doesn’t need to support CORBA; it just needs to communicate with Ellipsus, from where the object communicates with the legacy content or application. The menu the user sees on the phone doesn’t change, it’s just got a different back end: where a menu item would ordinarily have a URL behind it, the object-access menu has an object address instead.
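The “same menu, different back end” idea can be sketched in a few lines. This is a hypothetical stub, not Ellipsus code: each menu entry carries either an HTTP URL or an object reference, and the gateway invokes whichever is present. A plain Python callable stands in for what would really be a CORBA or RMI object.

```python
# Hypothetical sketch: a menu entry is backed by either an HTTP URL
# or a remote-object reference; the gateway dispatches on which one
# is present, and the handset never sees the difference.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MenuEntry:
    label: str
    url: Optional[str] = None                 # traditional HTTP back end
    obj: Optional[Callable[[], str]] = None   # object-access back end

def resolve(entry: MenuEntry) -> str:
    """What the gateway does when the user picks this entry."""
    if entry.obj is not None:
        return entry.obj()          # call the remote-object stub
    return f"GET {entry.url}"       # fall back to a plain HTTP fetch

balance_obj = lambda: "balance: 42.00"  # stands in for a CORBA call
entries = [
    MenuEntry("News", url="http://example.invalid/news"),
    MenuEntry("Account balance", obj=balance_obj),
]
for e in entries:
    print(e.label, "->", resolve(e))
```

The design point is the one the article makes: the object path lets the back end be stateful and dynamic in a way a stateless HTTP fetch can’t, without the phone needing to know anything about it.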

Ignoring the problem
And then there are those who would ignore the problem completely, saying that they’re focusing on the problems created by having multiple systems in the first place. One such company is Stockholm-based mi4e, which makes a plug-in that acts as a WAP protocol gateway on existing Microsoft IIS or Apache webservers.

There are also service developers, like France-based Selfswitch or Stockholm-based Xpedio, which is producing unified messaging systems that let operators offer customers one central repository from which they can stay connected to voicemail, email, faxes and a synchronized schedule; or Port42, which makes application portfolio packages that operators can buy in bulk, branding entire suites of applications to offer their customers instant application packages.

Similarly, there is Stockholm-based ZoomOn, which designs and implements vector-based graphics (VBG), and operates on the assumption that WAP – which does not support VBG – isn’t here to stay.

These companies are in effect saying that it’s too early to dedicate a company to bringing content to users via existing platforms or procedures, but that when the platform is agreed upon, they’ll be there selling the stuff that will make people want to burn up those airtime minutes.

In fact, unified-messaging vendor Xpedio is going one step further, developing a platform for that time, about three years from now, when Britney Spears, or whoever is then Britney Spears, decides to become a “Virtual Operator.” Britney’s going to give away a SIM card with every CD that lets her teeny-bopping buyers get 10 minutes of phone time, 300 SMS messages and a Kiss Britney game.

“The platform they’re working on lets you, say, if you’re a U2 fan get a U2 subscription whether you’re in Ireland or Sweden,” said Port42’s CEO, Johan Rosenlind. “That’s a great idea but it’s still a couple of years away.”

Not so fast: they, and all other platform vendors, will confront significant resistance in the form of iPlanet (the Sun/Netscape alliance), Oracle (Portal-to-Go, ASWE9i), the Icelandic entry WAPalizer and Microsoft MIS. Basically, all these tools do much the same thing. How Xpedio will stand up in a fight against the portal-mongers remains to be seen.

Nokia: Let ’em Make Cake…

The future is wireless, or at least that is what Nokia, Ericsson and a host of startups and network operators are earnestly hoping. But the quick success of 3G – The Third Generation of mobile telephony – is more than profitable icing for these companies; it has now become a matter of survival….

This article, which ran in the February 2001 issue of Tornado Insider magazine, looks at the overall climate in European development of 3G, and then explores how Europe’s two largest telecom networking manufacturers, Ericsson and Nokia, are each coping with the challenge.

…………………………………………………….

In the main lobby of Nokia House, a wood-steel-and-glass curiosity in the Finnish city of Espoo, is an impromptu cell-phone museum. In it, alongside all the sexy phones that rocketed the Finnish manufacturer to No. 1 in handsets, we viewed the suitcase-sized Mobira Talkman “portable” cell phone all the yuppies were buying in 1986.

For all its prescience in handset design and user habits, no one knows better than Nokia how difficult it is to predict future trends. Painfully aware of its industry missteps earlier in the 1990s, Nokia discusses 3G with reverence, making predictions about “classes” of applications and “styles” of usage.

But it won’t predict the specific applications that will emerge as killers. The logic is cunning: Predict the next SMS? Thanks, no. But it will hot-house every person with an idea for an application, maintain an open API (application programming interface), provide technical details to everyone, and marketing support to the successful few. Now there’s a situation in which Nokia doesn’t care what the next big thing is, because, in theory at least, it will already have it under its wing.

“Nokia’s approach is absolutely that,” says Mika Koskinen, CEO of Entirem, which develops a secure wireless transaction platform for banks and portals in cooperation with Nokia. “The general impression is: focus very strongly on our core business, and don’t then get too heavily involved with third parties, at least at the early stages.”

For startups, this is great news. “It’s impossible for us to define the killer apps,” says Mikko Pyykka, Nokia’s 3G application marketing manager, “so we just need to cooperate with a large number of developers. But we think that at the end of the day the real killers will be somehow related to messaging.”

Messaging is what Nokia says will be the main application when it comes to revenues for 3G, and it, like Ericsson, farms its developer community for the latest and greatest. But unlike Ericsson, Nokia seems to be cut-and-dried about the application developers in the overall food chain – they’re valued, to be sure, but they’re outsiders.

During the Sydney Olympics, MobileChannel.Network set up a one-way messaging system for Nokia to provide sport content services. MC.N now cooperates with Nokia on a number of levels. “We did some basic business studies with them,” says MC.N’s Janne Makinen.

“It’s kind of a two-way development cycle. They give us technical access and early releases of new products for us to develop new versions of our product, and we give them feedback of how things work. They also give us a live environment in which to test our solutions before we go to operators.” Makinen says he believes that Nokia’s involvement will play a significant role in MC.N’s development as a company, creating a need for MC.N products by delivering Nokia’s goal of a complete end-to-end system, from network to handsets to applications.

Another Nokia developer is Genimap (formerly Karttakeskus), which began cooperating with Nokia in 1996, when Nokia released its first Communicator – the hinged-brick PDA aimed at the high-flying business and “poser” crowd (that’s Nokia’s own internal market-segment term). The first mapping products were popular address finders: the user entered an address and was served a digital map.

“There are different levels of cooperation,” says Mikko Salonen, CEO of Genimap. “In the beginning, it was very important for us to get cooperation on development tools – how to make WAP-based applications, and so on. Nokia gave us technical papers and documentation, but there was also a two-way exchange of technical information – we also gave our ideas to them and shared our views.”

Currently, Genimap is working with Nokia to develop location-based services such as “turn-left, turn-right” maps complete with street-level content. But Salonen stops short of saying that his company’s relationship with Nokia has changed his life. “The relationship has accelerated our business plan, and they are important for us, but I’m not willing to say this is so very extraordinary or different – they’re just a partner,” he says.

Nokia likes it that way. “In some cases, as with customer-specific services tailored and run for one operator, we might have an exclusive,” says Nokia 3G strategist Ilkka Pukkila, “but we would never go with one single application provider, for example rich SMS. Our strategy is to give operators as much choice as we can, but to offer value added – such as with the interplay between applications, like adding theater tickets to your datebook. We control the platform, and integrate the applications to suit the specific needs of the customer.”

Pukkila says very soon you and I will use our “personal trusted device,” a kind of handheld multimedia terminal to buy a Big Mac. Why would we ever buy a burger with the phone? Pukkila points out that is the wrong way of thinking. “You wouldn’t want to do it just out of the blue. But, for example, McDonald’s will have its own 3G wireless LAN in the restaurant, and when you walk in, you’d get, for example, messages about specials, which you could buy with your phone. And while you ate, you could view McDonald’s content. Or you could pre-order and pick up your paid-for meal with no waiting when you arrived,” he says.

To Nokia, it would seem, development of these kinds of applications will for the foreseeable future be left to the third parties, allowing Nokia – and McDonald’s for that matter – to pick the best of breed of every single app it buys. In fostering this climate, Nokia will likely never again have to say that the latest hot thing just never appeared on its radar.

In concentrating on the network vendors’ efforts to develop applications, we have left undiscussed, for now, the solidity of the model of UMTS as telecom savior – and that solidity is far from certain. Industry analysts, notably those within Forrester Research, have increasingly pointed to flaws in the revenue models the telecom operators are banking on. “There’s no killer application. There is no such thing. Let’s go for a killer environment. We don’t want to be bound by specific applications.”

This quote, from an executive at a European mobile operator, was featured in Lars Godell’s January Forrester report on the future of UMTS, which the report called “a survival question.”

“Of course it makes more sense for the equipment manufacturers to do the lion’s share of the application hot-housing,” Godell told Tornado-Insider, “But every operator must also have an open mind, to nurture niche and local applications developers as well as creating strong local partnerships.”

Forrester remains skeptical that, even with the hottest applications, incumbent European mobile operators as we know them today will survive the shakeout. Forrester predicts that by 2011, impending pressure from the network vendors to upgrade again to 4G technology will fuel a desperation to reach profits from UMTS amid an environment of saturated subscriber markets, continued spending on marketing and network upgrades and declining average revenue per user (ARPU).

Could it be that eventually the network vendors end up swapping gear for equity to bail out operators? Will forced consolidation result in the death of all independent operators and allow the survival of only the largest pan-European players?

Other analysts say that the Forrester report is overly gloomy and that it does not take into account the improved capital efficiency of investment in 3G as compared to 2G. But most agree that evolution, and indeed consolidation, is not an unlikely outcome. “The shape of the mobile communications industry has not stopped evolving,” said Peter Knox, a telecom analyst at Commerzbank Securities, “and consolidation is probably a likely result.”

FUD is the Bastion of the Weak and the Shameless

Or, A FUD-Flapping Flack and her SCADA-Fear! Mongering

Were I less gracious, I would list the name and PR agency and customer. Sadly, I am gracious. I hate that I am gracious, especially since several people I know received this same pap from this flack. But seriously. This kind of stuff just has to stop. Next time? I swear, I’m naming names.

An open response to two recent emails from incompetent publicists:

On 10/02/2012 16:17, Marge wrote:

> Hi Nick,
> I see you are planning on attending RSA in San Francisco and I wanted to see
> if you have some time for a quick briefing during the show.

> The media have recently reported that hacker collective Anonymous posted
> what appears to be login details for Israeli SCADA industrial-control
> systems; including instructions on how to hack into nuclear power plants and
> water facilities.
>
> I wanted to give you the opportunity to meet with a [redacted] executive to
> discuss how critical infrastructures are utilizing SCADA software to control
> and automate machinery. [redacted] is uniquely suited to provide insights
> into how some of the largest oil & gas companies and nuclear facilities
> worldwide are protecting mission critical systems from cyber attacks.
>
> If you would like to speak with a [redacted] executive, please me know and I
> will be happy to set up a time.
>
> Best,
>
> Marge

> [redacted] PR Team

>

Hiya, Marge,

Let me get this straight: you state that a hacker collective posted what purported to be login details for Israeli SCADA systems and therefore I should learn about [redacted]?

Wayta attempt to capitalize on Anonymous, Marge!

Your measured, weasel-like wordsmithing indicates that you understand fully that no such incident actually occurred, and that you are intentionally misleading me, hoping that I read “media have recently reported” as proof that this happened.

Which means that you are trying to trick me into visiting your client.

Does your client understand the Fear, Uncertainty and Doubt you are spreading like so much fertilizer? Do they understand that you are baldly exploiting a totally false episode which did not result in the dissemination of any SCADA credentials, so that your client might sell SCADA security equipment?

What, there weren’t enough actual or possible recent SCADA hacking episodes to capture your imagination?

Marge. Bubaleh.

Shame on you.

 

So I sent that back to her, and the next morning I get a reply from her boss:

 

On 13/02/2012 10:52, Betty <betty@flack.com> wrote:

Hi Nick

It unfortunate that this made it’s way out the door on Friday. We appreciate your candor in pointing out our error.

We are pointing to the fact that this type of cyber terror is possible. It is never our mission to “fear monger” and we reported what was all over the internet in short order. We were not the original source for this story, and its certainly our mission to make sure we fact check whenever possible, unfortunately, this went out before we had a chance to double check the new updates on this story.

We have noted this, corrected our records and removed your name from our database of bloggers.

Again, please accept our apologies for upsetting your Saturday morning.

Best,

Betty

 

Well, Betty, “it unfortunate” indeed. Your reply acts as if this was a mere fact-checking error, made in the heat of sending out a breaking story – STOP THE PRESSES! – if only you’d had TIME to tear through this with a red pen as you ordinarily do, why none of this would have happened!

The krypt3ia piece ran on 20 January. Marge’s balderdashtardly missive went out 10 February.

Let’s look at Betty’s explanation once more, hey?

…and its certainly our mission to make sure we fact check whenever possible, unfortunately, this went out before we had a chance to double check the new updates on this story.

I’m glad she likes to check facts whenever possible. Even if she can’t keep her “its”es straight.

Marge, yours was the worst kind of fear-mongering. Where understanding actual attacks against SCADA systems is so important, you’re using that fictional example – in which lazy, non-fact-checking journalists re-spewed rubbish and were later humiliated for doing so – as the pretext to try and get me to meet with you?

Lady, I write a security blog and run a company that deals in response to actual security incidents. Our clients are serious people with serious issues to solve, and no time whatsoever for bullshit.

Did you hope I was some kind of uninformed, lazy, press-release-consuming, video-news-release-running hack journalist who would just suck that crap down and spew it out on the other side?

Shame on you.

As a matter of fact, starting right now, I am going to do what I can to call you out for exactly what you are: the worst kind of uninformed, unctuous, disingenuous, FUD-spewing, fear-mongering, press-release-writing hack of a flack. You, Marge, are what gives security PR people a terrible name. You are what makes customers afraid to listen to vendors, afraid that their consultants are lying to them, afraid that they must triple-check any statement made by someone outside their organization. This causes delays in responding to actual security incidents, which allows attackers more time to do damage, while the attacked spend cycle after cycle trying to understand from which side they’re being screwed worse, the attackers or the consultants and professionals there to help.

There are scores, if not hundreds, of public relations professionals in the world of security products who have the integrity to leverage the actual product to demonstrate how it can stand on its merits; who believe as I do:

FUD is the bastion of the weak and the shameless.

Shame on you.