any problem going from 9.x (don't laugh) to 11 directly?

C Gray frankfenderbender at council124.org
Thu Feb 15 07:08:34 UTC 2018


Moving to something called "stable" means that the build (not the release) has been stabilized.
At Network General, Tektronix (Printing, Test & Measurement, and Graphics Workstations divisions), Intel, Cypress Semiconductor (EDA), Analogy (EDA), and Mentor Graphics (EDA), development used verified tools and modules rather than the newest ones: we waited until a major release had 2-4 minor releases behind it, releases that fixed the problems users would otherwise end up debugging to make up for incomplete pre-release testing policies. [Note: new car models have more problems and glitches than recalled/fixed or return-model releases; the same goes for anything manufactured in a world where fixing costs money and "the customer is always right" has mutated into "the customer is always ripe".]

In Test and QA (integrated with Development) we were taught, by losing customers and by the cost, time, and resources sunk into fixing issues directly related to poor planning, short-cutting, and thin testing, NOT to pass those inherent problems on to later releases and to end-users/customers, because it all comes back to you in the end. That goes double if the code is third-party or partner software that is still "green", i.e., untested by real-world users doing more real-world things than most release-builders have the time or the look-ahead imagination to test. A release may be approaching "not supported any more" status, but that often means it needs as much support as a major release. So, mixed in with dropping support way too soon, there is an almost-fetishized practice of uncaring and 'could-care-less'-ness (learned from the profitable bad examples of Microsloth and [cr]Apple), and people now must own the fall-apart software and tools that such lackluster work ethic yields.

Spaghetti code, non-modularity, platform dependencies... they're all coming back as "computer science" is replaced wholesale by the rapid-release strategies of "computer marketing". Planned obsolescence is a behavioral issue more than a materials problem, and once accepted as a worldview it tends to hurt you directly, at a cellular level, rather than indirectly, at an update level. Tweaking ought to be left to tweakers; code should be better [PRE]tested. We QA folks were usually hated by sales and marketing types, and at first by developers too; before we were excommunicated, though, the developers came to see that we were code-friendly, and that we lengthened their weekends without the tension and fret caused by back[ground/mind]ed debugging.

Think hard about going "new": the processes that produced all of the past bugs are still creating new-and-improved ones, unless, of course, those processes are considered part of the product and their improvements and need-for-improvement are heeded, implemented, well-documented, and verified as solved. By that measure, the newer releases of practically all software these days are de-evolving back to a pre-Knuth/pre-Dijkstra era.

If FreeBSD r11 doesn't show me fewer issues than r10, which should show fewer issues than r9, then the opposite may actually be true (or TrueOS).
An absence of process improvement -- which must invariably also include and demonstrate improvement of the improvement process itself -- will only make more problems, and make them worse.
As timed releases of flaws become "newer" and "newer", the time-tested fixing of flaws becomes older faster, approaching non-existence in the calculus of release policy.

The drive for real-world stability is lost to the fetish of "new"-ness.
That is a capitulation to the extraction and exploitation of time.
As Orwell put it quite well (and this also covers the speeding of "now" into a disappearing "then", until the "future" really ceases to exist at all):
	"Who controls the past controls the future. Who controls the present controls the past."
The squeeze between PCers deleting the past and PCs 'transmerging humans' in the future is suffocating, and the only way to stop it, to allow [unsuffocated] thinking, is to do just that: stop and think....

My suggestion would be to go for 10.4! It has been tested against itself more than 11 has been tested against itself. Other people's release habits have little to do with actual quality, because habits do not assure QA. Time lets people apply "the town square" approach to solving issues, and at the very least it produces known "avoid" flags and shared workarounds. Even if r11 will be better, let it demonstrate that first. There's no shortage of upgrades, so why do one at all if the time spent resuscitating your system only grows in breadth and/or depth? Maybe check the areas where you'll most use 9 vs. 10.4 vs. 11 (if tickets opened/closed in those areas are indeed tracked), so you can see whether an intended high-use module/function has had changes made to it, and to what degree; a rough sketch of that follows below. If that information is missing from the build-release process, then the build-release process as it now exists needs to be transcended. Bugs need to be prevented... not discovered.
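For the ticket-checking part, here is a minimal sketch (untested, and not an official FreeBSD tool) that counts problem reports per release line through the FreeBSD Bugzilla REST interface at bugs.freebsd.org. The "Base System" product name follows the web UI; the exact "version" and "status" values below are my assumptions, so adjust them to whatever the tracker's search form actually offers:

import json
import urllib.parse
import urllib.request

BUGZILLA = "https://bugs.freebsd.org/bugzilla/rest/bug"

def open_pr_count(version: str) -> int:
    """Count problem reports filed against one release version string."""
    params = urllib.parse.urlencode({
        "product": "Base System",    # product name as shown in the web UI
        "version": version,          # assumed value; check the search form
        "status": "Open",            # assumed status name; adjust as needed
        "include_fields": "id",      # keep the response payload small
        "limit": "0",                # 0 = no limit in stock Bugzilla REST
    })
    with urllib.request.urlopen(f"{BUGZILLA}?{params}") as resp:
        return len(json.load(resp)["bugs"])

for ver in ("9.3-RELEASE", "10.4-RELEASE", "11.1-RELEASE"):
    print(ver, open_pr_count(ver))

Narrow the query with a "component" parameter if you only care about, say, the kernel or one subsystem; the point is to compare the rate of flaws in the pieces you actually use, not the marketing cadence.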

Maybe trust your experience rather than a release-number-incrementing algorithm, especially when the number is followed by ".0".

Decide based on the rate of flaws rather than on the speed of releases, and consider that "change for change's sake" is a pastime (the passing of time) and an investing habit that wastes your future time by "selling the past short".

best,
chris


