My latest webinar interview was with the owners of Saber Offroad, a small but increasingly well-respected company focused on 4×4 recovery gear.
Over the 45-minute interview we covered a range of topics and I learned a great deal – more than I expected to – and I'm sure the attendees on Zoom also left the session better educated.
An interesting and disturbing topic we covered was that of recovery gear rating and labelling. This is something I have written about before, but this time a new point was made.
Products such as kinetic energy recovery ropes, soft shackles, snatch rings and the like tend to be mass-produced in places like China. So while the brand name may be different, the factory may be common and often the materials are too; there are only a few makers of synthetic ropes such as Dyneema. Yet there are differences in product design and quality control, not to mention service and warranty – so there is plenty of scope to differentiate based on quality. But how is the consumer to tell the difference between a box-shifter and a company actually focused on quality? Good question.
As an example, recovery gear ratings are entirely arbitrary. You can make a soft shackle or snatch strap and rate it anything you like. Now if you go too far, consumer law kicks in – you couldn't realistically rate a 5000kg strap at 15,000kg because that would be a clear breach of law, easily proven in the event of an accident, and the ACCC will come down hard on you if your gear is shown to be over-rated and that over-rating contributed to an accident.
But what is "over-rated"? You could simply look at the breaking strength of the base rope, let's say 9000kg, and say your product has that same strength. But that ignores any splicing, joins, knots, the effect of bending and so on, let alone inconsistencies in the base rope or the finished product, all of which in practice lead to a lower breaking strength.
The obvious solution is to test the final product. But there's no requirement to do that. And if you do test, how many tests, and under what circumstances? How do you set your rating based on the results of the tests? The answer right now is any which way you like. Take the highest or the lowest of your test set, which could be 1 test or 100 – it's up to you.
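To make that gap concrete, here's a minimal sketch of the two approaches. This is purely my own illustration – the test figures, the 1.25 safety factor and the method itself are assumptions for the sake of the example, not an industry standard or anything Saber does:

```python
# Illustrative only: the numbers and method below are assumptions,
# not any industry standard.

BASE_ROPE_MBS_KG = 9000  # quoted minimum breaking strength of the raw rope

# Hypothetical destructive-test results (kg) for the finished shackle.
# Splices, knots and bending mean these come in well below the raw rope figure.
test_breaks_kg = [7100, 6800, 7400, 6950, 7200]

SAFETY_FACTOR = 1.25  # assumed margin applied below the weakest test result

# "Box-shifter" approach: just re-use the base rope's strength as the rating.
naive_rating = BASE_ROPE_MBS_KG

# Test-based approach: rate below the weakest finished-product result.
tested_rating = min(test_breaks_kg) / SAFETY_FACTOR

print(f"Naive rating:  {naive_rating} kg")       # 9000 kg
print(f"Tested rating: {tested_rating:.0f} kg")  # 5440 kg
```

Same raw material, yet the honest, tested rating comes out thousands of kilograms lower than the naive one – which is exactly why the tested product can look like worse value on the shelf.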
So imagine this. You walk into a shop and there are two soft shackles, say, on the shelf. One costs $45 and is rated to 10,000kg, and the other $55 and is rated to 9000kg. You'd be tempted by the cheaper, higher-rated one, right? Better value.
It's not obvious, but they are both made from the exact same material – 12mm Dyneema, for example – and therefore the breaking strength of the raw material is the same. But the design is subtly different. And the biggest difference is that the more expensive one has been tested, with a rating set below the weakest test result, whereas the "higher-rated" shackle hasn't been tested at all; the manufacturer just assumed its breaking strength would be the same as the base rope's. I don't think that's right, as it puts the companies trying to do the right thing on quality and safety at a commercial disadvantage.
So that’s the sort of discussion we had with the guys from Saber – watch/listen to it below, or use the links in the description to jump around as you wish.
And in the video below I explain more about recovery gear rating and why it needs to change, with more detail in this blog post.