Category Archives: Software

Despite SuperFish, Bundled Software Is Not All Bad

Recent news surrounding Lenovo's shipping the insecure adware known as SuperFish has stirred up more hate for bundled software. Yet I'd guess we have all relied upon pre-installed software and enjoyed the benefits of additional bundling. Most devices and PCs come 'bundled' with operating system software such as Microsoft Windows or Apple OS X. Other less controversial categories include media software (think DVD or Blu-ray codecs), games, office suites, and security tools.

Samsung is a company known for its bundleware, so much so that Korean courts have ordered it to allow customers to remove the software from their phones. On the other hand, I've found some of its offerings quite useful:

  • Calendar and tasks are solid and Exchange compatible
  • Customized tray is quite convenient
  • Integrated power-saving mode has helped with battery life
  • Samsung Knox allows me to encrypt my phone without rooting
  • Timer and alarm clock apps are both solid and easy to use
  • Voice recorder is solid and advertisement free

Knox was apparently so well received that Google has integrated it into Android. My other, non-Samsung phones have also included a mix of useful and not-so-useful bundles. A few of the best included the Swype keyboard and a handy automation feature.

Of course not all bundled apps are appreciated: Samsung's Magazine app is not my first choice, the sketchbook app is not the most intuitive, and I'm not a big fan of Uber's ride-sharing app being installed automatically with a recent update. That said, on the whole the good ones far outweigh the others.

Bundling can have downsides besides annoyance or security bugs. When platform makers have too much power, such as a monopoly, their bundling can be anti-competitive, swallowing up whole markets. Still, from the customer's perspective the lower prices, (sometimes) enhanced ease-of-use, and heavy discounts on bundled software are tough to resist. Besides, gradually giving more and more money to fewer and fewer companies is a disadvantage few customers probably consider when they're shopping.

Imagine buying a new PC and not being able to play DVDs or music without paying extra. That was the case with the original Xbox because of the added cost of licensing the patents. Today most devices can play back media using common, patented technology because those licenses are already part of a bundle. Likewise most can open a PDF document or a spreadsheet, and update their own hardware drivers, without extra effort because of bundled software. So next time you encounter a new device with a desktop full of icons, remember there may be some treasure hiding there.

FocyOverride Gives You Control Of Browser Focus

UPDATE: Since Firefox 57 this add-on no longer works, and because of lack of interest I’ve discontinued it.

You can control the default form focus with this new, premium Firefox add-on. As a long-time KeePass user I often found myself clicking on the same username or e-mail field time and again to begin auto-typing. A few years back I built an add-on to override the default page focus, or lack thereof. Now I'm offering it as a product so you too can take control of Firefox's form focus.

It can also highlight focused inputs, select input content, and help with voice control by blurring focus so elements can be called out. Upgrades are free for life, and a money-back guarantee is included.

For more information see the product page at PaulRRogers.com/focyoverride.

Issue Tracking Needs Are Specialized

Different companies have different kinds of issues and ways of managing them, so I probably shouldn’t be surprised there are many software and service solutions available. And as an engineer it’s tempting to fall into one of two extremes: build it from scratch or buy something and deal with it.

Do-it-yourself was my preferred approach as a young and ambitious programmer. After all, what could be better than a custom-built solution? However, within five years of making and maintaining such solutions I found myself in the get-it-and-forget-it camp. Another five years later I think I’ve settled somewhere in the middle.

My first experience with issue-management systems was on-line support forums: simple message boards requiring a lot of moderation. Shortly thereafter I was exposed to a somewhat custom, Lotus-based solution, this time as a support technician handling the problems. Since the solution required Lotus, and only worked within the network, it seemed ripe for replacement with a web-based frontend. Studying web programming made me eager to try my hand at making something better.

Building custom intranet solutions came next, with employment at a company branching out into other industries. It was interesting to see how the various needs could be met with custom software. At first simple comment systems were enough for employees to keep track of their customer complaints, notes, and follow-ups. Of course e-mail and IM were also being used heavily to supplement them.

In time, however, maintaining so many different systems became burdensome for only two or three developers. Increasingly I also saw patterns in the various needs, patterns that some existing solutions already fit.

One of the first to be used in house was Trac, and it worked well for our needs initially. Integration with software repositories was nice. Some add-ons for things like discussion boards worked well enough. The wiki, its integration, and its export options were my favorite features. At one point I even used the mantra "If it's not in Trac it doesn't exist" in a push to unify the disparate repositories of knowledge.

Over time, though, its simplistic reports and lack of built-in multi-project support became too problematic. Other solutions were tried with varying degrees of success. One of the most lasting was Atlassian JIRA and its wiki offering. Its customization options did not seem obvious or immediately better to me at the time, though some in management were more familiar with it and appreciated its built-in flexibility.

Experiments along the way pushed the boundaries of such trackers, pressing them into service as the official source of truth and for customer complaint tickets, feature tracking, road-mapping, to-do/task management, time tracking, and discussion. Ultimately I've come to realize that specialization is needed, even if it takes the form of creative use of existing solutions, extensions to them, or rethinking them. And at times it is better to have many separate solutions than to keep everything in one.

Adventures In Self-Hosting: Owncloud Review

After using it as my primary file-sync and backup service for about 10 months, I've concluded Owncloud still requires an hour or so per month from a tech-oriented person, but it's worth it. With enough persistence, and tolerance for some lingering quirks, Owncloud could work for you, whether or not you know how to set up a server.

Disclosures by Edward Snowden and the limits of Dropbox's free accounts have made Owncloud an increasingly tempting option. Whatever Dropbox, Apple, or any other third-party cloud claims about being unable to read our data, the biggest players all still control the software that handles the keys. So if they get curious, or a government warrant appears, they can always push an update to read whatever we have not encrypted ourselves. This didn't particularly concern me, as I've little to hide from the authorities, but every well-intentioned backdoor has the potential to be exploited by less trustworthy types. Dropbox's 2 GB limit (16 GB with enough referrals) also proved limiting as my collection grew.
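For the truly cautious, client-side encryption takes the provider's key-handling software out of the equation entirely. Here's a minimal sketch in Python, assuming the third-party `cryptography` package and purely illustrative file paths:

```python
from pathlib import Path
from cryptography.fernet import Fernet

# Generate a key once and store it where no cloud provider can see it.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt locally; only the ciphertext ever enters the synced folder.
plaintext = Path("notes.txt").read_bytes()
Path("ownCloud/notes.txt.enc").write_bytes(fernet.encrypt(plaintext))

# Only the holder of the local key can get the plaintext back.
recovered = fernet.decrypt(Path("ownCloud/notes.txt.enc").read_bytes())
assert recovered == plaintext
```

The obvious trade-off is that server-side conveniences like previews and web editing stop working on encrypted files.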

So around December last year I began trying to self-host Owncloud with a Raspberry Pi and Owncloud version 5. Since it lacked authentication logging I had to add it manually. Thankfully more recent versions already have this built in. It worked well enough, if slowly at times, for the first few months.

Once my wife began using the server along with me, problems started cropping up: disappearing files, logs complaining about locking, and upgrade trouble. This was around version 6.0.x, and SQLite seemed to be the culprit, as it's not designed for multiple concurrent writers. Manually migrating to PostgreSQL did the trick; things were back to normal. Version 7 should include official support for SQLite-to-PostgreSQL migrations.
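To see why multiple users strain SQLite, consider this small Python demonstration (my own sketch, not anything from Owncloud's code). SQLite locks the whole database file per writer, so with the retry timeout disabled a second writer is refused outright:

```python
import sqlite3

# Two connections to one database file, as two sync clients might hold.
# timeout=0 disables the retry loop so the failure shows immediately;
# isolation_level=None gives us manual transaction control.
alice = sqlite3.connect("owncloud.db", timeout=0, isolation_level=None)
bob = sqlite3.connect("owncloud.db", timeout=0, isolation_level=None)
alice.execute("CREATE TABLE IF NOT EXISTS files (path TEXT)")

alice.execute("BEGIN IMMEDIATE")  # writer A takes the whole-file write lock...
alice.execute("INSERT INTO files VALUES ('/photos/cat.jpg')")

try:
    bob.execute("BEGIN IMMEDIATE")  # ...so writer B is turned away
except sqlite3.OperationalError as error:
    print(error)  # "database is locked"

alice.execute("COMMIT")
```

A client-server database like PostgreSQL handles concurrent writers with row-level locking instead, which is why the migration helped.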

If you're often working with a lot of different files, Owncloud feels a bit slower to sync than Dropbox, though Dropbox isn't perfect either. Pauses and network outages also seem slightly more disruptive to Owncloud.

Another caveat is that neither Dropbox nor Owncloud works very well with Git clones. They're slow to sync because of the many files involved, conflicts can get messy, and file locking can get in the way. So if you're working with projects checked out of Git, you'd be wise to keep your clones in a non-synced folder. Checking in often, or backing up another way, should suffice; if not, GitLab is a popular, self-hosted alternative to keeping local-only clones.
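One possible compromise: snapshot each clone into the synced folder as a single `git bundle` file rather than syncing `.git/` itself. A rough Python sketch, with paths that are purely illustrative:

```python
import subprocess
from pathlib import Path

# The clone lives outside the sync folder; only the bundle is synced.
repo = Path.home() / "projects" / "myapp"
bundle = Path.home() / "ownCloud" / "backups" / "myapp.bundle"
bundle.parent.mkdir(parents=True, exist_ok=True)

# One file containing every branch and tag syncs far better than
# thousands of small objects under .git/.
subprocess.run(
    ["git", "-C", str(repo), "bundle", "create", str(bundle), "--all"],
    check=True,
)

# Restore later with: git clone myapp.bundle myapp
```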

Each major release of Owncloud brings welcome enhancements, and aside from my SQLite issues the upgrades have been relatively painless. I'd definitely recommend it for a business of any size or for technical individuals. Perhaps some packaging tools or OS distributions can make the process easy enough for even the most basic user to get their own cloud.

Update: Owncloud's client now makes selective sync much easier, and it has coped with my careful selection arrangement nicely. Also, there is an alternative known as Seafile which transfers multiple files more quickly than Owncloud while also offering self-hosting.

User Input Is Often Like Water, Finding All The Cracks

Makers of software and technology services must strike a delicate balance between giving users the freedom to enter input of their choice and preventing compromise of the system. Traditionally systems have assumed users have the best of intentions; this can lead to positive emergent behavior and growth. But as more business has moved on-line, so have thieves and other malicious users. It's no wonder that malicious input is now the number one threat on OWASP's top 10 list.
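Injection, the poster child of malicious input, is worth seeing concretely. In this Python sketch (the schema and the hostile string are invented for illustration) the same query is built two ways; only the parameterized form treats the input as data rather than code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # hostile input posing as a user name

# Unsafe: the input is spliced into the SQL, so the OR clause matches everyone.
leaked = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
print(leaked)  # [('alice', 'admin')] despite the bogus name

# Safe: a parameterized query keeps the input out of the SQL grammar.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(safe)  # []
```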

Water seeks the path of least resistance as it flows, making rivers crooked. Likewise, as the volume of user input increases it seeps into more and more areas of weakness. And as the developers of services address these weaknesses, the added complexity bends and contorts their systems. E-mail software, for example, has become increasingly complicated because of how it is used, misused, and creatively adapted.

At times the data itself becomes deformed to fit within whatever bounds cannot be broken, like water filling a form. Twitter's 140-character limit has led to or expanded creative uses of text, including hash-tagging ('#'), at-symbol ('@') nicknames, and URI-shortening services.

Despite the advantages of allowing liberal input, my experience has been that it's usually best to start strict and loosen up later. Trying to deny values or data that people have become accustomed to is a challenge; the pushback may be too much to overcome, meaning the producers must live with that data forever or try to slowly deprecate it.
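As an illustration of starting strict, here is a hypothetical username validator in Python. The exact pattern is my assumption; the point is directional: widening the pattern in a later release never invalidates stored data, while tightening it would.

```python
import re

# Deliberately narrow whitelist: lowercase letter first, then letters,
# digits, or underscores, 3-32 characters total. Loosen later if needed.
USERNAME = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(value: str) -> str:
    if not USERNAME.fullmatch(value):
        raise ValueError(f"invalid username: {value!r}")
    return value

validate_username("paul_r")       # accepted
validate_username("p@ul.rogers")  # raises ValueError
```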

In extreme cases systems are like submarines in the deep sea, which must withstand constant, destructive pressures. Without careful management and design, user behavior and contributions can become the tail wagging the dog. This can be beneficial in some cases, though it can also lead to unmaintainable expectations. One example in the larger software ecosystem: as the free software movement advances, some users (at times myself included) come to expect software at very low or zero cost despite the costs involved in its production.

What do you think?

Moving To Windows For Speech Recognition

Practical speech recognition options for non-Windows operating systems are few. Yet after years of overuse my hands needed a break from the keyboard and mouse. After a few months with SphinxKeys on Linux, some experiments with Simon Listens, and reading about the limitations of Dragon Dictate for Mac, the only viable option was to return to Microsoft's OS.

As a child in the early 1990s I grew accustomed to Microsoft's DOS and consumer editions of Windows. College and a job at a very Apple-friendly company led to spending a lot more time with Linux, enterprise Windows, and OS X. As newer versions offered speech recognition and text-to-speech, I toyed with those features like everything else. Sadly those brief trials left me with the impression that they were not ready for everyday use. Years later, typing and mousing around had caught up with me. In late 2013 there was no denying it was time to revisit speech recognition, and much more seriously.

By this point I was years into Linux and loving it: powerful shells, federated package management, light resource usage, lots of software choices … everything except voice input. While Linux has several speech tools, they all seemed impractical:

  • IBM’s ViaVoice was sold and died out
  • Palaver sends voice data through Google, incompatible with my job requirements
  • Platypus didn’t work with my version of Dragon
  • Simon Listens was cumbersome and never worked for me
  • SphinxKeys only simulated keystroke input
  • Vedics didn’t compile and seems out-of-date

There are more options on Linux, but after trying so many I had already found more success with Windows.

Around this time an old copy of Dragon NaturallySpeaking (circa 2007) turned up at a local thrift shop. Spending some time with it revealed how useful the different modes were, showed the promise of the software development kit, and piqued my curiosity about the tools others had built on it. Sadly it didn't support 64-bit systems, and its integration with existing software was very limited. Apart from Microsoft Office it didn't have a lot to offer out of the box. Reviews of later versions seemed to confirm that the software wasn't going to work for my needs.

Microsoft began offering Windows Speech Recognition with Windows Vista, and after using it for a few months on Windows 7 I can say it does a passable job with a good, properly configured microphone. Integration with built-in software like Internet Explorer and Windows Live Mail is solid. Other applications like Miranda IM work reasonably well too. Too bad most fall back to the annoying, if usable, dictation pad. Patience and persistence help in the hunt for the most practical solutions.

WSR can be resource intensive. My computer's memory usage climbs a bit, and things get slower as I keep many programs open. Using a lot of tabs in IE or Firefox caused the most slowdown, making scrolling a chore. Underpowered computers like netbooks, Celeron-equipped laptops, or older desktops only served to disappoint. Your mileage may vary.

While WSR works all right as is, it really needs customization options to fit a wider variety of workflows. There are a few tools out there for that.

Some of these also offer versions that work with Nuance's Dragon products, which helps avoid lock-in. At this point I've settled on WSR Macros with some AutoIt tweaks to get voice clicking without the mouse grid, among other things.

Today my voice does about 15% of the work. It helps most with e-mail, instant messaging, blogging, clicking, and window management. After seeing Tavis Rudd‘s presentation on programming by voice I hope to achieve a similar proficiency. Until then the experiments will continue as time permits.

Have you ever tried speech recognition? What did you think? If you’d like to share please comment.

Please Don’t Use Vanity Versioning

As the version numbers of software and services have crept into the public consciousness, the influence of marketing has moved into the numbering process. When version identifiers no longer communicate anything besides the passage of time or marketing campaigns, they are just vanity numbers.

Identifying versions of software and services can be a tricky business, with ever longer strings of digits, letters, and punctuation. Consider a version such as "1.15.5.6ubuntu4". Ubuntu or Debian package maintainers may feel right at home, but even software engineers like myself can get lost after the first or second dot.

Software versions often begin innocently enough: "1" is the first official version, with decimal digits afterward indicating incremental change. Changes to the leading digit traditionally marked significant, noteworthy changes in the software's behavior, capability, and/or compatibility. Sadly, I fear the marketing hype that accompanied the Web 2.0 movement and Google's Chrome browser has increased the popularity of vanity version numbering.

Sequels to movies and games are common, and when you see a number next to the title it provides instant context. You know that there may be some back-story, content, or previous experience awaiting you as you encounter the third or fourth release of an unfamiliar franchise. While numbers have fallen out of fashion in film and games, replaced by secondary titles, they served their role well enough. And releases within a franchise like Iron Man are more individual products than versions of a single application like Internet Explorer.

Regardless, a user seeing "Opera version 15" might not realize that upgrading from 12 means more than the usual "better than before". Version 15 radically changed how Opera displays pages, how it handles e-mail, and which add-on capabilities are available. This release clearly introduced breaking changes, which I consider the most important thing versions should communicate. Yet the pattern before version 15 had led users to believe the first number was not so significant.

Version numbers, or technically identifiers since they’re not always strictly numeric, can communicate a lot of different things:

  • Compatibility and incompatibility
  • Addition or loss of capabilities
  • Tweaks or fixes
  • Package information
  • Passage of time
  • Progress toward the first release (like 0.9 for 90%)
  • Revisions internal to the project
  • Start of a new marketing campaign

I'd argue that compatibility, capabilities, and fixes are the most important, prioritized in that order, which is why I think the Semantic Versioning concept is necessary. Despite being designed for Application Programming Interfaces, the behind-the-scenes components that make the 'cloud' and software tick, it is sorely needed in user-facing products like Internet browsers too.

Semantic Versioning’s goal of clearly communicating to machines and programmers can also help users understand potential consequences once they know the pattern. And it can be done easily, succinctly, and before they actually choose to update or install.
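Here is a sketch of how a machine, or a cautious user, might act on a semantic version; pre-release tags and other suffixes are ignored for brevity:

```python
# MAJOR.MINOR.PATCH: a MAJOR bump warns of breaking changes,
# MINOR adds capability, PATCH only fixes.
def parse(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    return major, minor, patch

def safe_to_auto_update(installed: str, offered: str) -> bool:
    """True when the offered version promises no breaking changes."""
    return parse(offered)[0] == parse(installed)[0]

print(safe_to_auto_update("12.17.0", "12.18.1"))  # True: same major version
print(safe_to_auto_update("12.17.0", "15.0.0"))   # False: Opera-12-to-15 territory
```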

Ironically it's the APIs users do not see that tend to be the most semantic, or at least consistent, in their endpoint URIs. These often contain the major version number clearly embedded, like "v1" in "api.example.com/v1/". My experience developing APIs has been that only the major version should be embedded in the URI, though minor and patch fragments can be useful for informational purposes.
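As a sketch of that arrangement (using Flask, my choice of framework here, not anything prescribed above), only the major version lives in the path while the full version is reported in the payload for information:

```python
from flask import Flask, jsonify

app = Flask(__name__)
API_VERSION = "1.4.2"  # full semantic version, informational only

# Minor and patch releases must stay backward compatible, so clients
# never need a new URI for them; only "v2" would signal a break.
@app.route("/v1/status")
def status():
    return jsonify({"version": API_VERSION})

if __name__ == "__main__":
    app.run()
```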

One interesting hybrid scheme is Java's. Its major and minor version numbers are semantic, with the major technically remaining at "1". And at least up until version 8, the latest as of this writing, it has remained largely backward compatible with the first official release. The minor version increases as capabilities are added: 1.1, 1.2, 1.3 … 1.8. Yet since 1.2 it has been marketed using only the minor number; articles referring to "Java 7" or "Java 8" are technically referring to 1.7 or 1.8 respectively. Sadly, the patch (a.k.a. update) versions of Oracle's official releases have gotten complicated.

If you are one of those privileged individuals choosing a version number please don’t get caught up in the hype. Let the needs of your users determine what is appropriate. And as I have the opportunity I’ll try to do the same.

Have you chosen a version identifier? Do you have some thoughts to share? If so please consider commenting.

Migrating From Apple OS X To Linux

Making the move to Linux from OS X was surprisingly easy in 2007. Being a software engineer with some limited Linux experience certainly helped. Choosing a user-friendly distribution with obvious customization tools, Kubuntu in this case, also facilitated the migration.

Before taking the plunge I had unknowingly worked with Linux servers in college. Later some coworkers installed it on computers I was trying to refurbish, and the experience of trying to remove Red Hat Linux reminded me of trying to purge a virus. (It had taken hold of the disk's boot record and boot partition, both foreign concepts to me at the time.)

Yet with time I became strangely drawn to the idea of a free OS that was immune to most computer viruses. So I tried dual-booting Windows 98 and Red Hat Linux 9 on my laptop for a while. Though it was mostly just an experiment, since it didn't run most of the games or the school software I wanted to use.

School and work had also introduced me to the world of Apple’s OS X. While the one-button mouse was an annoying limitation, almost everything else about it was easy and intuitively obvious despite years with Microsoft’s products. Having used only OS X at the office for a few years made it feel more and more like home.

After reading about Ubuntu's goal of becoming a more user-friendly distribution of Linux, I decided to try using it as a work desktop instead. Kubuntu's Windows-like layout felt familiar, with its task-bar, start menu, and default shortcuts. By then, though, OS X's Command key, serving as the modifier for application switching, copying, pasting, and quitting, had become second nature. Thankfully Kubuntu's KDE interface allowed me to map my keyboard's Command key (called 'Meta' in Linux) to most of the same functions it had served in OS X.

At the time Kubuntu's applications got the job done, if clumsily. KMail in particular felt awkward compared to Apple Mail, and sadly the next-best alternative, Evolution, crashed too frequently for comfort. Despite the lack of slick integration and polish, applications worked well enough:

  • Firefox was more or less the same
  • Java worked similarly and with fewer quirks
  • KMail had GPG encryption support
  • OpenOffice ran better
  • Plenty of terminals were available
  • Subversion worked fine

Over the years I upgraded with releases, tried a few betas (not a good idea for one’s primary workstation), and moved on to Gnome-based Ubuntu and XFCE with Xubuntu. Quirks along the way included post-upgrade problems requiring re-installation, hardware incompatibilities, software incompatibilities, and differences in hot-key configuration.

Ubuntu distributions have also increased system requirements over the years. What was once nice and snappy on a 2004-era desktop with only 650 MB of RAM became almost unusable by 2009. Ubuntu’s move to the Unity desktop has also played a role. Despite its similarity to OS X I found Gnome-2-esque or Windows-like desktop environments more comfortable.

Looking back after seven years I’m mostly satisfied with Linux’s performance as a desktop OS. But at times it certainly required persistence and willingness to learn the terminal to resolve quirks. In recent years terminal usage has been less and less necessary. Hardware vendor support has also made it more practical.

For the sake of user freedoms I hope it can someday satisfy all desktop users; though, as a software producer I have doubts about the impact mature and free software like Linux will have on the labor market and price expectations.

Have you tried–or considered–Linux on your desktop? If you’ve something to share please consider leaving a comment.

Add-ons, A Blessing Or Curse?

Software add-ons, a.k.a. plug-ins or extensions, offer the promise of more capability beyond the core package. Yet such expandability isn't free. In my experience they can be both a blessing and a curse.

Add-ons for projects like the Firefox browser and WordPress have benefited both users and makers. They keep the core lighter and simpler without sacrificing the flexibility and customization that have made those projects so popular. For example, paranoid browser users like myself can supplement built-in features with things like Disconnect or NotScripts.

Once I administered a website using Drupal’s content platform. It was a lot of fun to browse their modules section looking for things that would help enhance the site. However, with Drupal 5 and 6 upgrading meant disabling those modules first, installing the update, then re-enabling them. And some weren’t compatible or didn’t update themselves properly. It was also a manual process, even with the Drush tool helping me along the way.

Drupal began encouraging module authors to offer guarantees they would support the next major version. But I had already been burned. Newer versions may have improved the situation, but a friend and I moved the site to another platform instead.

Add-on advantages typically include:

  • Expandability where it’s otherwise impractical (because of the license, platform, etc.)
  • Lighter, simpler core
  • Customization apart from a vendor’s built-in capabilities
  • A core that can remain free and open while premium features are sold separately

Add-on disadvantages often include:

  • Installing and upgrading may be more complex than without them
  • Add-on interfaces (for programmers or end-users) can be limiting and awkward
  • Maintenance of add-ons may lag far behind the core, hindering core upgrades
  • Development costs to produce and maintain add-on interfaces and ecosystems
  • Additional security risks as the number of vendors involved and attack surface increase

One of the most famous add-ons is Adobe Flash Player. Flash was a boon to browser-based games and is only recently being overtaken by newer built-in browser capabilities like more advanced JavaScript and HTML5 features. It has also provided media playback for video and audio. Yet I have found it buggy, awkward, and inaccessible at times.

For a software producer, the ability to build upon existing platforms helps avoid starting from scratch. Often it's useful as a means to prototype ideas or experiment, though the risk of platform upgrades breaking one's work is ever present. On one project I found myself spending about 2-4 hours each week keeping some add-ons up to date with core changes coming from upstream.
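To make that upstream-breakage risk concrete, here is a toy Python sketch (all names hypothetical) of an add-on contract in which each plugin declares the core major version it targets, so the host can refuse mismatches instead of crashing mid-upgrade:

```python
CORE_MAJOR = 2  # bumped whenever the core breaks its add-on interface

class Plugin:
    requires_core_major: int = CORE_MAJOR
    def activate(self) -> None:
        raise NotImplementedError

class HelloPlugin(Plugin):
    requires_core_major = 2
    def activate(self) -> None:
        print("hello from the add-on")

class StalePlugin(Plugin):
    requires_core_major = 1  # built against the old core
    def activate(self) -> None:
        print("this would misbehave on core 2")

def load(plugins: list[Plugin]) -> None:
    for plugin in plugins:
        if plugin.requires_core_major != CORE_MAJOR:
            print(f"skipping {type(plugin).__name__}: built for another core")
            continue
        plugin.activate()

load([HelloPlugin(), StalePlugin()])
# hello from the add-on
# skipping StalePlugin: built for another core
```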

Looking at the big picture, my experience with add-ons has been generally positive. They have allowed me to tailor software according to personal preferences and needs, often far outside the intent of the original vendor. While the disadvantages have discouraged me from using one platform, they aren't enough to outweigh the benefits.

How about you? What’s your experience with add-ons, plug-ins, modules, and the like? Please consider commenting.

Why I Regret Making Free And Open Source Software

Looking back on the time and energy I poured into making free software leaves me with regret. From the whole experience I gained perhaps some minor notoriety, a few entertaining IRC chats, and the realization that ‘free’ doesn’t pay for itself.

Being a teen who loved game technology and Star Wars, I spent thousands of hours combining them to contribute to a game modification known as Star Wars Quake: The Call Of The Force. While I enjoyed the work at the time, it became obvious looking back that I can never directly profit from it because:

  1. It’s already been released for free
  2. Trademarks belong to someone else

Others have successfully turned their game-modding hobbies into careers or products. I wish them, and those seeking the same path, all the best.

Regardless, I don't regret making mods. What I regret most is spending so much time doing work with little or no hope of reimbursement. Ten to twenty hours a week over five years is a lot of time. Had it taken less time and effort, led to a job, or been something I could sell, I'd feel differently. So my advice to would-be producers/modders is to be very careful before working with another company's property, and to consider the consequences before releasing any of your work at no cost.

As a user of software, I find the abundance of free software undeniably a win: not only can it provide value at no cost, but there are often several zero-cost solutions. And it's increasingly obvious that free and open software is gaining serious popularity. Most troubling to me are the all-software-should-be-free expectations of users and the anti-proprietary culture demonizing producers who choose a different path. Despite making a living producing proprietary software, I too found myself frowning upon non-free or non-open software at times.

Cases can be, and have been, made for why open and/or free is the best way for some endeavors, such as non-profits or governments. Yet there is still value to consumers in paying for the use of, or a copy of, software instead of only its initial development. For example, I personally find paid editions of software more pleasant than the minefield of ads and toolbars increasingly common in otherwise free software.

There is also plenty of room for compromise. A few possible hybrid approaches include:

  • Vendors could agree to release source code when a product reaches end-of-life
  • Consumers could opt to pay a premium price for editions with source included
  • Source could be provided upon request, with or without redistribution rights

The trend seems to be that software producers feel pressure to move from standalone products to providing services. Anecdotal evidence also indicates that free and open isn't the be-all-end-all solution: SugarCRM moved to closed source, OwnCloud pushes paid editions and services, Google's MyTracks has stopped releasing sources, and Google has abandoned the open-source editions of many stock Android applications. Nicholas Carr does a reasonable job summing up the tension programmers experience as the free and open software movements march onward. You can also find out more about the labor issues from Ashe Dryden's post.

What do you think? If you’ve something to share please consider leaving a comment.