If you use open-source software, you might wonder why all those changes happen and why you should migrate to the next major versions.
Some projects don’t even (properly) update the framework dependency to the latest available minor release within a reasonable time frame, even though this is usually trivial since minors are fully backward compatible (BC).
Once you fall too far behind, it becomes harder and harder to maintain the project or even to balance competing demands like "server requirements" vs "fixes/improvements" and so on.
You want to avoid this, and the following chapters go into a bit more detail on how.
If you use (software) dependencies in general, you are responsible for keeping them up to date. Especially when a site is exposed to the public, ensuring it has all the necessary security adjustments and bug fixes applied is a key requirement.
In the PHP world, Composer usually handles the backend part of it, while npm/yarn and co. handle the frontend elements.
Frameworks aim to ease the process of hopping on to the next major release by supporting you with automation scripts and migration guides.
But there will always be some manual overhead involved. Doing this at a time when other people face similar issues and can even help you is much easier than dealing with it years after the fact, when no one remembers anymore what the issue you are facing is about.
In a framework context, there are usually different kinds of dependencies:
- Framework plugins/components (often 3rd party)
- Libraries (usually unrelated to framework, but can also be used by them)
As soon as Composer cannot update anymore due to "unresolvable incompatibilities", e.g. the framework using a different version of a library than your other dependencies that depend on the same one, you are in dependency hell. Resolving this becomes more difficult over time, and the accumulated refactoring work can grow to a level where the changeset becomes difficult to verify. You don’t want to get into that situation, so it is best to tackle each released update early, within a reasonable time frame.
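To illustrate (vendor and package names here are made up), a typical dead end looks like a root composer.json whose plugin constraint was never raised along with the framework:

```json
{
    "require": {
        "somevendor/framework": "^5.0",
        "acme/legacy-plugin": "^1.0"
    }
}
```

If `acme/legacy-plugin` 1.x still requires framework `^3.0`, `composer update` will refuse to resolve, and the longer the gap, the more packages get entangled in the conflict.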
It is well established that keeping up with the development of your dependencies in small pieces is better than trying to do everything at once after years of stagnation.
You can more easily do those smaller steps and verify them separately than having to deal with one huge upgrade and too many side effects or blockers.
In the end, even though it might feel like more work at first, it is actually way cheaper this way, and the benefit is on your side, as well as on the side of any user of the website(s).
Semantic Versioning can help here to decide the impact of certain upgrades.
That said, this can be quite subjective and specific to the framework or library.
Some apply semver so strictly that even a very theoretical edge case creates a new major and therefore forces users to upgrade quite actively.
Whereas others use a more pragmatic approach or define a public API vs internal API for major/minor decisions.
Depending on the ecosystem, both the "strict majors" and the "BC breaking in minors" approaches have their impact and need to be weighed against each other.
I have been a big advocate of framework semver, but so far no larger framework package has adopted it yet.
It basically uses 4 digits (as supported by Composer), with the 2nd digit being a mini-major within the same framework major.
From experience I can say that in CakePHP the lack of this makes it much harder for the ecosystem to adhere to the corresponding majors. Very likely this is also true for other frameworks and their ecosystems.
It can also quickly become chaotic to figure out which plugin version fits which framework version, requiring a look into the source code or detailed Packagist data.
And on top: once you need a new major within an outdated version line, you are forced to ship even BC-breaking topics in minors, as the next major number is already reserved for the one matching the next framework major.
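As a sketch of that idea (it is a proposal, not an adopted standard, and the package name is made up): a plugin released as `5.1.2.0` would track framework major 5 with its first digit and use the second digit as its own "mini-major". Since Composer supports four-digit versions, a constraint can pin both levels at once:

```json
{
    "require": {
        "acme/some-plugin": "~5.1.0"
    }
}
```

Here `~5.1.0` allows any `5.1.*` release of the plugin but not the next mini-major `5.2`, and certainly not a release targeting a different framework major.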
But to stay on topic:
If you have worked with the library long enough, you will know what to expect. Otherwise, the docs hopefully mention the strategy being applied.
Some frameworks like Symfony even explain in detail what they consider semver and what their BC promise covers.
Almost all frameworks also apply a FC (Forward Compatibility) strategy around majors.
Any larger change at the major level can already be part of the last minor(s) of the previous major as an opt-in.
This way you can already be there months ahead, using the latest and greatest, as well as keeping the "diff" towards the major minimal.
Then once you migrate, the only things left are the actual BC breaks that were not backported and some minor cleanup.
This is another important reason to always stay up to date and to remove deprecations wherever you can during the "minor upgrade" cycle.
We will address this further down in more detail.
Atomic upgrading is a term that refers to the upgrade process ideally being a constant flow of small changes, rather than only big ones every x years.
With Composer these days, it is as simple as running `composer outdated` and `composer update` to get upgrades on the "semver" level.
You can process and verify them as small chunks, and merge to master afterwards.
Usually this can be done every x weeks or even more often, depending on the quality checks in place (including tests, CI and QA).
Larger upgrades involving BC-breaking changes or library majors can be isolated this way (first the noise, then the actual isolated major release).
The major can then be integrated on its own; its changeset should be minimal and therefore easy to approve.
Make sure not to hold off too long on any such majors, as the previous version is usually not maintained anymore, meaning you are missing out on improvements, bug fixes and in some cases even security fixes.
So try to break down updates into manageable and verifiable chunks of changes, each with its own tests and QA.
I have been mentioning this in the past, specifically before each new CakePHP major got released.
Here we talk about it in a bit more general scope.
Shims – or in some cases also called polyfills (coming from JS terminology) – usually provide temporary functionality to ease the upgrading process.
There are usually 2 ways of applying them:
- Forward, by using things that would otherwise not yet be available to you (by framework major or PHP version).
- Backward, by keeping certain things the way they were.
FC (Forward Compatibility)
This is the proactive approach.
If you know months beforehand what kind of things will change, you can backport them as FC shims.
The most prominent ones out there are surely the polyfills of Symfony, providing PHP functionality way before those PHP versions are actually in use on your system:
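Pulling in such a polyfill is a one-line Composer requirement; for example, `symfony/polyfill-php81` backports PHP 8.1 functions and classes to older PHP versions:

```json
{
    "require": {
        "symfony/polyfill-php81": "^1.27"
    }
}
```

Your code can then use the new functionality today, and once the real PHP version is in place, the polyfill simply becomes a no-op.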
For CakePHP, as an example, there is the Shim plugin that does the same, e.g. for float vs string handling of "float fields in the DB".
This is the best approach if you already know that you want to upgrade.
It takes this topic out of the actual upgrade, keeping the migration effort at a minimum while providing you with the improvements well in advance.
You can also isolate each topic and address it separately without larger side effects.
From an API perspective similar things are usually done, e.g. by Symfony:
- Provide a new method that will become the only one in the next major.
- Deprecate the old one with a clear notice that this one will be removed in the next major, nudging everyone to already use the new "future-proof" one.
This way you can also already prepare the codebase to be future compatible, and the actual changes around a major upgrade should be minimal in footprint.
As for removing deprecations in your application code:
You can first silence them after each larger update (e.g. to the latest framework minor) to get things running again.
Often, you can set `E_ALL & ~E_USER_DEPRECATED`, or for phpunit.xml use `<ini name="error_reporting" value="16383"/>` (16383 being that bitmask as an integer).
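In context, such an entry sits in the `<php>` section of phpunit.xml (a minimal sketch; your other attributes and settings stay as they are):

```xml
<phpunit>
    <php>
        <!-- 16383 = E_ALL minus E_USER_DEPRECATED -->
        <ini name="error_reporting" value="16383"/>
    </php>
</phpunit>
```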
Once everything runs again, is tested, and possibly already deployed, you can slowly tackle those deprecations and in small steps refactor as needed.
Each deprecation type and its related small adjustments can be reviewed and pushed to master more easily. Side effects like "broken functionality" or BC breaks on your production server are very unlikely this way, compared to a "one-time big bang" alternative.
In the end, you will only have things left that cannot be touched without BC-breaking impact and that will have to remain until a larger refactoring is scheduled.
It is just important to keep this list minimal; it does not have to be empty at all costs.
BC (Backward Compatibility)
Here we mean the reactive approach.
Often, it can be tedious to fix up all the changes required by a major upgrade to make it just "work again".
You want to have a PR in the end with a set of verifiable changes that you can merge and continue working on again.
Especially for larger projects, having a lot of small upgrade changes piled on top makes the review, and finding and verifying the actually "critical" changes, even harder.
In my case, there were tons of validation rules that would have needed to be rewritten.
I just used a shim to auto-merge the old definitions (class property) into the new ones (method declaration) at runtime, allowing me to move thousands of lines of changes into the future.
The same was the case for relation (association) setup of models (table classes) or providing dependencies.
In general, you should always try to do FC first. Let it stabilize for a while, then do the actual upgrade and tackle as much as possible. But it is OK to offload some of the topics into the future.
First, make sure the actual code works fine again, then approach further deprecations and shim removals.
The less code you have to touch and modify for the final "big chunk" of upgrades, the lower the chance that you modify it in a way that actually introduces breaking behavior.
Classic issues are accidentally removing a `!` or inverting some condition when upgrading to new methods, without seeing that this actually changes how the application works.
And as we know, in reality often not even half of the functionality is covered by tests, so this might go unnoticed for a while, maybe even until production.
So having a minimal diff/changeset to review makes things much easier here.
I mentioned this at the beginning: Automation can play an important role, especially with larger codebases.
Everything that is automatable, should be automated. Either by the framework itself, or the community.
A public tool that is used heavily by quite a few projects is Rector.
It has rulesets for almost all frameworks and libraries by now.
The only downsides for me so far were:
- Runtime (huge memory consumption, conflicts in autoloading, local execution time)
- Order of execution (rules can easily break if run in an order they were not designed for)
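A minimal `rector.php` sketch (the configuration API differs between Rector versions, so treat this as an illustration of recent 1.x style rather than a definitive setup):

```php
<?php
declare(strict_types=1);

use Rector\Config\RectorConfig;

return RectorConfig::configure()
    // Limit the run to your own code to keep runtime and memory in check
    ->withPaths([__DIR__ . '/src', __DIR__ . '/tests'])
    // Apply the PHP upgrade sets matching composer.json's PHP requirement
    ->withPhpSets();
```

Running `vendor/bin/rector process --dry-run` first lets you review the planned changes before anything is written to disk.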
With modern IDEs it is also getting easier and easier to automate certain replacements (e.g. Shift+F6 for a safe rename), or to do "regex replace" in batches where needed.
I used that a lot for the CakePHP 3 to 4 update of applications, as my notes from that time show.
The "sed" tool is a locally working regex replacer which helped me a lot in quickly applying certain changes across the codebase where needed.
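As a self-contained sketch (the method names are just illustrative examples, not an official migration rule), a batch rename with GNU sed looks like this:

```shell
# Create a throwaway file standing in for application code
mkdir -p /tmp/sed-demo/src
printf '%s\n' '$this->loadModel("Users");' > /tmp/sed-demo/src/Demo.php

# Replace every occurrence of the old call with the new one, in place
# (GNU sed; on macOS/BSD sed the flag is: sed -i '' 's/.../.../g' file)
sed -i 's/->loadModel(/->fetchTable(/g' /tmp/sed-demo/src/Demo.php

cat /tmp/sed-demo/src/Demo.php
```

Always run such replacements on a clean git checkout, so every change shows up in the diff and can be reviewed before committing.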
That said, I recommend a more sophisticated (Rector/AST-based) approach where possible, as it can also consider scope and relevance instead of blindly updating all matches in files.
I hope I could give a few insights into the upgrade process around frameworks and which approaches are best applied to ease the effort as well as keep side effects minimal.
Bottom line: in most cases (especially for actively maintained sites/apps) it is counter-productive and way more expensive not to use atomic upgrading.
If your company or boss doesn’t understand this, there are tons of real-life examples out there that demonstrate exactly that, in facts and money spent.
Feel free to comment below with your experiences and cases, or if I forgot or overlooked something.