Category Archives: Programming

Special Treatment For Chrome Makes Everyone Else Second-Class Web Citizens

As a web developer I get it: testing against a bunch of different browsers is more work. It is hard to justify the effort when there is a great, cutting-edge browser pumping out features, one already used by a near supermajority in the most lucrative markets. Even many of its most popular competitors use it as a foundation for compatibility. And marketing, too, may see value in targeting Google’s Chrome browser.

Yet the reality is that unnecessarily straying from web standards, neglecting to test against other browsers, or delivering different experiences per browser ultimately pushes users of alternatives further and further away. This tyranny of the majority, and the exploitation of “valuable signals”, adds more roadblocks in front of those who are, or try to be, different.

As a user of an alternative browser myself, it’s tiring getting opaque error messages, blank pages, or broken forms when all I want to do is some light reading and occasionally submit something. Switching to the dominant, ordained browser often reveals an easier flow, extra options, or an otherwise problem-free experience. But I don’t want to go back to a browser monoculture.

One of the original aspirations in the web’s early days was to connect people who were different or marginalized: a place where one could interact without being instantly judged by superficial qualities, a place outside the sameness bubble. Even if it’s just a small choice like using and supporting a different browser, let’s strive to fulfill that vision of diversity and inclusion.

Running Laravel’s Own Tests

After some exploring, here are the steps to get the Laravel Framework’s own tests passing on Ubuntu Linux, or Windows’ WSL2, with PHP 7.4:

# For older Ubuntu releases without PHP 7.4 packages
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update

# Install library and server dependencies
sudo apt-get install \
  memcached \
  php7.4 \
  php7.4-dev \
  php7.4-dom \
  php7.4-mbstring \
  php7.4-memcached \
  php7.4-mysql \
  php7.4-odbc \
  php7.4-pdo \
  php7.4-sqlite3 \
  redis-server

# Start local servers that tests rely on
sudo /etc/init.d/redis-server start
sudo /etc/init.d/memcached start

# Change directory to the framework or fork folder
cd framework

# Copy the test configuration
cp phpunit.xml.dist phpunit.xml
# Remove comments around Redis settings
sed --in-place --regexp-extended \
  --expression='s/(<!--|-->)//g' phpunit.xml

# Install PHP dependencies
composer update --prefer-lowest \
  --prefer-dist \
  --prefer-stable \
  --no-interaction

After all that it should be possible to run the tests with ./vendor/bin/phpunit. Then the usual flags, like --filter testMyNewFeature, can help run one’s own tests.
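
For example, a new test added to a fork might look something like this minimal sketch; the class, namespace, and testMyNewFeature method are only hypothetical placeholders, and a real test would live under the framework’s tests/ directory:

<?php

namespace Illuminate\Tests\Support;

use PHPUnit\Framework\TestCase;

class MyNewFeatureTest extends TestCase
{
    public function testMyNewFeature()
    {
        // Replace with assertions for the actual behavior being added
        $this->assertTrue(true);
    }
}

Running ./vendor/bin/phpunit --filter testMyNewFeature would then execute only that method.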

Easier Laravel DB Migrations With Zero Downtime

When Laravel is paired with a MySQL DB it can become increasingly difficult to make schema changes as the installation grows in popularity. While MySQL is getting better with its Online DDL there are still some limitations. And even with the latest online tools, Laravel’s built-in migration scripts won’t consistently use them without specialized code. To make minimal-downtime changes easier I’ve helped create an adapter for Percona’s pt-online-schema-change (PTOSC) and MySQL’s Online DDL called laravel-online-migrator (LOM).

Consider a Laravel DB migration adding a column:

Schema::table('my_table', function (Blueprint $table) {
    $table->string('color', 64)
        ->nullable();
});

To use PTOSC the queries have to be manually written as shell commands:

pt-online-schema-change \
  D=homestead,t=my_table,h=localhost \
  --user=homestead --password=secret \
  --alter "ADD color VARCHAR(64)" \
  --execute

Then it must be wrapped in a PHP function like exec, or run outside the normal Artisan migrate workflow. When done outside migrate a row must be inserted into the “migrations” table for each migration, unless Laravel’s built-in migrations will never be run.

Now with laravel-online-migrator the migration script can remain unchanged. When migrate is run the script is automatically changed from this PHP code

$table->string('color', 64)
    ->nullable();

to this command

pt-online-schema-change \
  D=homestead,t=my_table,h=localhost \
  --user=homestead --password=secret \
  --alter "ADD color VARCHAR(64)" \
  --execute

and the command is run.
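
For context, here is a minimal sketch of such an unchanged migration file as it might appear in a Laravel project; the class and table names are only illustrative. LOM hooks into the migrate command and performs the rewrite at run time, so the migration author keeps writing ordinary schema code:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddColorToMyTable extends Migration
{
    public function up()
    {
        Schema::table('my_table', function (Blueprint $table) {
            $table->string('color', 64)->nullable();
        });
    }

    public function down()
    {
        Schema::table('my_table', function (Blueprint $table) {
            $table->dropColumn('color');
        });
    }
}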

Before executing migrations the generated commands can also be reviewed for correctness with --pretend, like this:

php artisan migrate --pretend

Pretending can be helpful when one is unsure what the adapter will do. When using PTOSC that output can also be copied and pasted into a shell with the --execute flag replaced with --dry-run. Dry runs will confirm with PTOSC whether or not the command is ready before the original table is modified.

LOM tries to be flexible: it avoids changing queries unnecessarily and supports common ‘raw’ queries as well. So dropping a table won’t go through PTOSC, and migrations that rely on hand-written SQL should still work without human intervention. For example a raw query like

\DB::statement("ALTER TABLE my_table CHANGE fruit fruit ENUM('apple', 'orange')");

will be translated to a PTOSC command, while

\DB::statement("DROP TABLE my_table CASCADE");

will remain unchanged.

Fine-grained control over which online tool, if any, is used is available through the configuration file config/online-migrator.php, environment variables like ONLINE_MIGRATOR, and traits on the migration scripts themselves. For more see the documentation on usage. Also of note, the output of “php artisan migrate” will be more verbose in order to aid resolving problems with migration runs.

UPDATE 2019-02-05: I forgot to mention that the convenience option doctrine-enum-mapping was included to make changing tables with DB enumerations easier. By setting its value to ‘string’, migrations can use Eloquent code to change enum-equipped tables, though not yet for changing the enum columns themselves.

If this has been helpful please consider commenting here or opening an issue or pull request on the project’s GitHub.

NOTE: All opinions and thoughts expressed here are my own and do not reflect those of my employer.

“User-Agent” Headers Holding Back The Web

Every time you visit a website the name and version of your browser are sent to the service. In fact, with every requested image, video, and style sheet the same data is sent again and again. This not only wastes bandwidth, it also subtly encourages web makers to rely upon it as a shortcut to make services work consistently across platforms. Later browsers then include more tokens in their “User-Agent” header to maintain compatibility with these fragile services. Over time the header becomes larger and the web more brittle. For example, Internet Explorer 11 identifies itself as “Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko”. Can you tell which part communicates that it is Microsoft’s Internet Explorer?

Of course it’s impractical for every web site/service to test every possible combination of browsers and platforms. So those of us developing sites and services only test the most popular browsers of the moment. Over time this leads to a web which caters to a mix of the most popular browsers of the past and present, depending upon when any given service was last made. As more and more devices leverage HTTP for the Internet-of-things this problem may grow more complex. Web standards and feature detection can help.

With well-defined standards and run-time detection of features it’s possible to avoid the trap of ‘sniffing’ the browser from its UA headers. And while cutting-edge features and services may benefit in the short term from taking the shortcut of browser detection, they can also leverage vendor-specific prefixes for features in flux. Once standardized, the prefixes can be replaced with the official, non-prefixed names.

In my experience, detecting significantly different platforms such as mobile or Internet-of-Things (IoT) devices is still a valid use for the UA header. But ultimately those cases may be better served by a new, simpler header or by more platform-independent designs. Until then Mozilla’s recommendations are a reasonable place to start.

In recent years even the once-dominant Microsoft notes the weaknesses and problems with UA headers. Sadly, my experiments sending an empty or minimal UA header have found too many sites broken to recommend the approach to non-technical users.

How about you? What do you think of UA headers?

Unused Work Does Not Have To Be Discouraging

Soon after I was hired, my boss told the story of a project he had worked on for a significant amount of time, like months. It never saw the light of day. Subconsciously I think I denied that would ever happen to me, at least not for any major work. Four years later I had not yet encountered such hardship. Yet soon enough that all changed.

Worse than seeing my own work tossed, I had to make the call to discard a coworker’s serious effort. After a long delay, a key component of the work had been lost, so instead I had to redo the entire project from scratch. Ironically enough my effort turned out to be doomed as well.

At the very end of the rewrite, with only one feature left, I discovered the platform vendor’s latest development kit lacked any encryption libraries. (Finding out so late was a rookie mistake on my part.) When they finally produced a suitable kit the platform had changed so much I couldn’t port my rewrite in a timely manner. So with much chagrin I rewrote it again with the suitable kit and all was well–except for my ego.

Despite wasted time and resources one can typically find something good whenever work goes unused. Over the years I’ve been reminded of a few:

  • It is a learning opportunity
  • Helps avoid getting overly attached
  • New ideas often accompany do-overs
  • Practice
  • Redos are a chance to develop grit

Of course these rarely add up to match the lost time or money. But if the learning opportunities are maximized it can save a lot more in the future.

It can be especially frustrating for those of us who are technical to accept non-technical reasons for work to be mothballed. For us “business reasons” can feel so abstract and intangible. It’s almost as if it’s arbitrary and frivolous. Still, businesses exist to produce a profit, and even organizations have to make trade-offs when their resources are limited.

Until time travel is sorted out, forecasting client needs or project requirements will almost certainly remain an inexact science. While we wait for our future overlords to return let’s take solace by remembering the good that can be salvaged from the ashes of our abandoned work.

Programming As A Privileged Career

Some friends in more traditional careers like farming and manufacturing have opened my eyes to the privilege it is to have a job in software. Looking at the bigger picture reveals that programming for a living depends upon many other roles to enable such an abstract pursuit. Working from the bottom of Maslow’s hierarchy I can imagine these would at least include: food production, waste management, housing, medical care, police, a justice system, electricity production, hardware manufacturing, and transportation services.

One experience in particular stands out as a moment of awakening. During a long drive on vacation the conversation turned philosophical as my friend shared his perspective on the disappearing skill sets needed to maintain expensive and old, yet very profitable, manufacturing equipment. Since he was experienced and flexible he was able to keep the machines running, but he encountered few others as willing or knowledgeable in mechanics and electronics. While I’d like to be as adept at keeping my existing belongings chugging along too, doing so in the face of increasingly, maybe unnecessarily, complex things makes repair and maintenance less practical. And sadly few can afford to be the repair experts when we consumers are so quick to replace things with the new and shiny.

Farming in North America was a career for 90% of the population as late as the American Revolution. Now it has dwindled to about 1%. A highly specialized society has certainly broadened the choices for careers in the modern age. It has also increased the need for higher levels of education. And in scarce job markets the competition for work means employers can be selective.

Despite the downsides one sometimes faces as a software or services producer, it is still quite a privileged endeavor compared to many others. Next time I’m waiting in line for service I’ll have to remember all this. I’d rather not go back to the job behind the counter, and I certainly don’t want to make it any worse for those who have no choice.

Issue Tracking Needs Are Specialized

Different companies have different kinds of issues and ways of managing them, so I probably shouldn’t be surprised there are many software and service solutions available. And as an engineer it’s tempting to fall into one of two extremes: build it from scratch or buy something and deal with it.

Do-it-yourself was my preferred approach as a young and ambitious programmer. After all, what could be better than a custom-built solution? However, within five years of making and maintaining such solutions I found myself in the get-it-and-forget-it camp. Another five years later I think I’ve settled somewhere in the middle.

My first experience with issue-management systems was with on-line support forums. These were simple message boards requiring a lot of moderation. Shortly thereafter I was exposed to a somewhat custom, Lotus-based solution, this time as a support technician handling the problems. Since the solution required Lotus, and only worked within the network, it seemed ripe for replacement with a web-based frontend. Studying web programming made me eager to try my hand at making something better.

Building custom intranet solutions came next with employment at a company branching out into other industries. It was interesting to see how the various needs could be met with custom software. At first simple comment systems were enough for employees to keep track of their customer complaints, notes, and follow-ups. Of course e-mail and IM were also being used heavily to supplement.

In time, however, maintaining so many different systems became burdensome for only two or three developers. Increasingly I also saw patterns in the various needs that fit some existing solutions.

One of the first to be used in house was Trac, and it worked well for our needs initially. Integration with software repositories was nice. Some add-ons for things like discussion boards worked well enough. The wiki, its integration, and its export options were my favorite features. At one point I even used the mantra “If it’s not in Trac it doesn’t exist” in a push to unify the disparate repositories of knowledge.

Over time though its simplistic reports and lack of built-in, multi-project support were too problematic. Other solutions were tried with varying levels of success and failure. One of the most lasting was Atlassian JIRA and its wiki offering. Customization options did not appear obvious or immediately better to me at the time. Though, some in management were more familiar with it and appreciated its built-in flexibility.

Experiments along the way pushed the boundaries of such trackers, pressing them into service as the official source of truth and for customer complaint tickets, feature tracking, road-mapping, to-do/task management, time tracking, and discussion. Ultimately I’ve come to realize that specialization is needed, even if it takes the form of creative use of existing solutions, extension software for them, or rethinking them altogether. And at times it is better to have many separate solutions than to keep it all in one.

User Input Is Often Like Water, Finding All The Cracks

Makers of software and technology services must strike a delicate balance between allowing users the freedom to enter their choice of input and preventing compromise of the system. Traditionally systems have assumed users have the best of intentions; this can lead to positive emergent behavior and growth. But as more business has moved on-line, so have thieves and other malicious users. It’s no wonder that malicious input is now the number one threat on OWASP’s top 10 list.

Water seeks the path of least resistance as it flows, making rivers crooked. Likewise, as the volume of user input increases it also seeps into more and more areas of weakness. And as the developers of services address these weaknesses, the fixes often add complexity that bends and contorts their systems. E-mail software has become increasingly complicated because of how it is used, misused, and creatively adapted.

At times the data itself becomes deformed to fit within whatever bounds cannot be broken, similar to water filling a form. Twitter’s 140-character limit has led to or expanded creative uses of text including hash-tagging (‘#’), at-symbol (‘@’) nicknames, URI-shortening services, etc.

Despite the advantages of allowing liberal input, my experience has been that it’s usually best to start strict and loosen up later. Trying to deny values or data that people have become accustomed to is a challenge. The pushback may be too much to overcome, meaning the producers must live with that data forever or try to slowly deprecate it.
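
As a rough illustration of starting strict, here is a minimal sketch in PHP (the field name and limits are only hypothetical) that checks a nickname against a tight whitelist before it is ever stored; loosening the pattern later is far easier than cleaning up data that never should have been accepted:

<?php

// Start strict: short, predictable nicknames only.
// Loosening this pattern later is easier than purging bad data.
function isValidNickname(string $nickname): bool
{
    // 3-20 characters; letters, digits, and underscores only
    return preg_match('/^[A-Za-z0-9_]{3,20}$/', $nickname) === 1;
}

var_dump(isValidNickname('river_rat42'));               // bool(true)
var_dump(isValidNickname('<script>alert(1)</script>')); // bool(false)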

In extreme cases systems are like submarines in the deep sea which must withstand constant, destructive pressures. Without careful management and design, user behavior and contributions can become the tail wagging the dog. This can be beneficial in some cases, though it can also lead to unmaintainable expectations. An example in the larger software ecosystem is seen as the free software movement advances and some users (at times myself included) come to expect software at very low or zero cost despite the costs involved in its production.

What do you think?

Please Don’t Use Vanity Versioning

As the version numbers of software and services have crept into the public consciousness, the influence of marketing has moved into the numbering process. When version identifiers no longer communicate anything besides the passage of time or marketing campaigns they are just vanity numbers.

Identifying versions of software and services can be tricky business with increasingly longer strings of digits, letters, and punctuation. Consider a version such as “1.15.5.6ubuntu4”. Ubuntu or Debian package maintainers may feel right at home, but even software engineers like myself can get lost after the first or second dot.

Software versions often begin innocently enough: “1” is the first official version, with decimal digits afterward indicating incremental change. Changes to the leading digit traditionally marked significant, noteworthy changes in the software’s behavior, capability, and/or compatibility. Sadly I fear the marketing hype that accompanied the Web 2.0 movement and Google’s Chrome browser has increased the popularity of vanity version numbering.

Sequels to movies and games are common, and when you see a number next to the title it provides instant context. You know that there may be some back-story, content, or previous experience awaiting as you encounter the 3rd or 4th release of an unfamiliar franchise. While numbers have fallen out of fashion in film and games, replaced with secondary titles, they served their role well enough. And releases within a franchise like Iron Man are more individual products than versions of a single application like Internet Explorer.

Regardless, a user seeing “Opera version 15” might not realize that upgrading from 12 means more than just the usual “better than before”. Version 15 saw Opera radically change how it displays pages, how it handles e-mail, and which add-on capabilities are available. This release was clearly introducing breaking changes, something I consider the most important thing versions should communicate. Yet their pattern before version 15 led users to believe the first number was not so significant.

Version numbers, or technically identifiers since they’re not always strictly numeric, can communicate a lot of different things:

  • Compatibility and incompatibility
  • Addition or loss of capabilities
  • Tweaks or fixes
  • Package information
  • Passage of time
  • Progress toward the first release (like 0.9 for 90%)
  • Revisions internal to the project
  • Start of a new marketing campaign

I’d argue that compatibility, capabilities, and fixes are the most important, prioritized in that order, which is why I think the Semantic Versioning concept is necessary. Despite being designed for Application Programming Interfaces, the behind-the-scenes components that make the ‘cloud’ and software tick, it is sorely needed in user-facing products like Internet browsers too.

Semantic Versioning’s goal of clearly communicating to machines and programmers can also help users understand potential consequences once they know the pattern. And it can be done easily, succinctly, and before they actually choose to update or install.
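
For instance, under Semantic Versioning’s MAJOR.MINOR.PATCH pattern a machine, or a cautious user, can judge whether an upgrade is potentially breaking just by comparing the major numbers. Here is a minimal sketch in PHP; the version strings are only examples:

<?php

// Under SemVer, a change in the MAJOR number signals breaking changes.
function isBreakingUpgrade(string $from, string $to): bool
{
    $fromMajor = (int) explode('.', $from)[0];
    $toMajor   = (int) explode('.', $to)[0];

    return $toMajor > $fromMajor;
}

var_dump(isBreakingUpgrade('12.17.1', '15.0.0')); // bool(true): expect breaking changes
var_dump(isBreakingUpgrade('1.15.5', '1.16.0'));  // bool(false): compatible additions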

Ironically it’s the APIs which users do not see that tend to be the most semantic or consistent, at least in their endpoint URIs. These often contain the major version number clearly embedded, like “v1” in “api.example.com/v1/”. My experience developing APIs has been that only the major version should be embedded in the URI, but minor and patch fragments can be useful for informational purposes.
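
In a Laravel application, for example, that might look something like the following sketch (the route, payload, and version values are hypothetical): only the major version appears in the URI prefix, while the full version is reported in the response body for information only.

<?php

use Illuminate\Support\Facades\Route;

// Only the major version is embedded in the URI; minor and patch
// releases stay backward compatible within the same prefix.
Route::group(['prefix' => 'v1'], function () {
    Route::get('widgets', function () {
        return [
            'api_version' => '1.4.2', // informational only
            'data'        => [],
        ];
    });
});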

One interesting hybrid scheme is Java’s. Its major and minor version numbers are semantic, with the major technically remaining at ‘1’. And at least up until version 8, the latest as of this writing, it has remained largely backward compatible with the first official release. The minor version increases as capabilities are added: 1.1, 1.2, 1.3 … 1.8. Yet since 1.2 it has been marketed using only the minor number. Articles referring to “Java 7” or “Java 8” are technically referring to 1.7 or 1.8 respectively. Sadly the patch (a.k.a. update) versions for Oracle’s official releases have gotten complicated.

If you are one of those privileged individuals choosing a version number please don’t get caught up in the hype. Let the needs of your users determine what is appropriate. And as I have the opportunity I’ll try to do the same.

Have you chosen a version identifier? Do you have some thoughts to share? If so please consider commenting.

Add-ons, A Blessing Or Curse?

Software add-ons, a.k.a. plug-ins or extensions, offer the promise of more capability beyond the core package. Though such expandability isn’t free. In my experience they can be both a blessing and a curse.

Add-ons for projects like the Firefox browser and WordPress have benefited both the users and makers. Doing so keeps the core lighter and simpler without sacrificing the flexibility and customization that have made them so popular. For example, paranoid browser users like myself can supplement built-in features with things like Disconnect or NotScripts.

I once administered a website using Drupal’s content platform. It was a lot of fun to browse their modules section looking for things that would help enhance the site. However, with Drupal 5 and 6, upgrading meant disabling those modules first, installing the update, then re-enabling them. And some weren’t compatible or didn’t update themselves properly. It was also a manual process, even with the Drush tool helping me along the way.

Drupal began encouraging module authors to offer guarantees they would support the next major version. But I had already been burned. Newer versions may have improved the situation, but a friend and I moved the site to another platform instead.

Add-on advantages typically include:

  • Expandability where it’s otherwise impractical (because of the license, platform, etc.)
  • Lighter, simpler core
  • Customization apart from a vendor’s built-in capabilities
  • Allows the core to be free and open while premium features are sold separately

Add-on disadvantages often include:

  • Installing, upgrading may be more complex than without
  • Add-on interfaces (for programmers or end-users) can be limiting and awkward
  • Maintenance of add-ons may lag far behind the core, hindering core upgrades
  • Development costs to produce and maintain add-on interfaces and ecosystems
  • Additional security risks as the number of vendors involved and attack surface increase

One of the most famous add-ons is Adobe Flash Player. Flash provided a boon to browser-based games and is only recently being overtaken by newer built-in browser capabilities like more advanced JavaScript and HTML5 features. It has also provided media playback for video and audio. Yet I have found it to be buggy, awkward, and inaccessible at times.

As a software producer the ability to build upon existing platforms helps avoid building from scratch. Often it’s useful as a means to prototype ideas or experiment. Though the risk of platform upgrades breaking one’s work is ever present. On one project I found myself spending about 2-4 hours each week keeping some add-ons up-to-date with core changes coming from upstream.

Looking at the big picture my experience with add-ons has been generally positive. They have allowed me to tailor software according to personal preferences and needs, often far outside the intent of the original vendor. While the disadvantages have discouraged me from using one platform, they aren’t enough to outweigh the benefits.

How about you? What’s your experience with add-ons, plug-ins, modules, and the like? Please consider commenting.