Judy, Ron, and all,
Well, since you asked…
Judy is correct in remembering that I have written about this very
topic. I did an article on jewelry quality that was published in the
August 1999 issue of AJM, and one on plating quality control in AJM in
May 2000. I also produced a fair amount of consumer-oriented info (with
the help of Orchid members) for Blue Nile for their website, although
the company actually used very little of it.
As for a quality rating system, the topic has been discussed again
and again at MJSA over the last 10 years, and probably longer than
that. Committees have been formed, proposals discussed, and as far as
I know, that’s sort of been the end of it. When push has come to
shove, there hasn’t been enough consensus or financial support to
actually launch a quality-marking program.
The first problem, of course, is deciding what constitutes “good
quality.” What for one person is barely adequate, for another is
quite good. The extremes aren’t hard to define – most folks can
agree on extremely fine quality, and on very, very poor quality. The
problem is that so much product falls somewhere in the middle. Where do
you draw the line?
For example, Ron mentions testing chains for strength. This is very
feasible – in fact, several papers have been presented at the Santa
Fe Symposium discussing methods for doing just this. The tricky part
in developing a quality mark is this: how strong is strong enough? Do I
need a chain that can support 60 lbs., or is 40 lbs. good enough? Is
the chain that supports 60 lbs. inherently better than the one that
supports 40 lbs.? And which test do you use? Some chains will perform
better under some test conditions, and worse under other test
conditions. This type of testing is also destructive, so is not
suitable for one-off products. (As far as I know, chain strength can
only be determined by breaking the chain.)
Gathering this sort of “performance” data is certainly feasible, at
least in theory. Chris Corti of World Gold Council in London has done
some interesting work in this area: he presented it at Santa Fe
probably five years ago, if anyone is interested in looking it up.
With X-ray fluorescence, you can test for underkarating (JVC already
provides this service, with support from MJSA, although I’m not sure
how many people are using it). X-ray fluorescence can also be used to
test for plating thickness. Ring shanks can be measured, and data on
how much pressure it takes to bend them can be obtained from physical
testing or determined mathematically. Visual inspection
can be relied upon to evaluate whether stones have been set straight,
whether solder seams have been done correctly, etc.
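The arithmetic behind the underkarating check, at least, is simple: karat is just 24 times the gold mass fraction, so a 14K stamp implies roughly 58.3% gold. As a minimal sketch of how a tester might flag a piece (my own illustration, not anything JVC or MJSA publishes; the 0.3-karat tolerance is a hypothetical placeholder, not a legal standard):

```python
# Hypothetical sketch: convert an XRF gold-content reading to karats and
# flag underkarating against the stamped karat. The tolerance value is an
# illustrative assumption, not an actual regulatory figure.

def measured_karat(gold_mass_fraction: float) -> float:
    """Express fine-gold content in karats (24K = pure gold)."""
    return 24.0 * gold_mass_fraction

def is_underkarated(gold_mass_fraction: float,
                    stamped_karat: float,
                    tolerance_karats: float = 0.3) -> bool:
    """True if the assay falls below the stamp by more than the tolerance."""
    return measured_karat(gold_mass_fraction) < stamped_karat - tolerance_karats

# A piece stamped 14K should assay near 58.3% gold.
print(measured_karat(0.583))        # ≈ 13.99 karats
print(is_underkarated(0.55, 14.0))  # 13.2K against a 14K stamp → True
```

The hard part, as with chain strength, isn’t the math; it’s agreeing on what the tolerance should be and who enforces it.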
One difficulty comes in explaining to the customer what this
objective data means. For example, would the consumer really find it
helpful to know that this chain supports 100 lbs., while that chain
supports 30 lbs.? I think you’d have to offer some translation, e.g.,
you need a chain that has xxx tensile strength so the baby can’t
break it, but you don’t want it to support more than xxx lbs. because
that would allow you to be strangled by the chain if it caught on
something.
In the jewelry industry, such explanations have normally been left
up to the retailer, upon whom the bulk of consumer education
traditionally falls. And look how that has turned out: there is a
grading system for diamonds, with clearly defined quality standards
and lots of information available to the consumer, and yet plenty of
poor-quality diamonds are sold by retailers who assure uninformed
consumers that an I3 is a diamond offering beauty and good value.
(And maybe for some customers it is, as long as it is priced
appropriately. But that’s an argument for another day.)
Having more objective data about jewelry performance available to
the consumer would doubtless be a good thing. But it would cost
money. Someone has to test the materials. Someone has to write up the
results. Someone has to publish the results. Someone has to market
the product and get people to pay for it. Someone has to pay the
lawyers to defend the publisher when the inevitable lawsuits pop up.
(When you name names and say something is poor quality, you’re going
to get sued eventually, at least in this lawsuit-happy country. And
yes, Consumer Reports does get sued periodically by unhappy
manufacturers.) So how do you pay for all this? You can’t pay for it
with advertising, lest you introduce bias into the system. Who knows
how many customers there are willing to pay $100 or more a year to
learn how a particular company’s jewelry measures up? (I would guess
the number is significantly below the 100,000 mark.) So far,
manufacturers haven’t gotten excited about pitching in a couple grand
a year to pay for it.
Another hurdle: jewelry is frequently not a branded item. When I
walk into Zales, I don’t know who made which pieces. And unless I
know Chain Y came from Manufacturer X, I can’t apply the data, even
if it were made available. You certainly couldn’t just label it
“chains from Zales.” Those chains might come from a dozen different
manufacturers, some of acceptable quality, some less good. If I’m the
Good Quality Manufacturer who happens to sell chain to Zales, and you
test the Poor Quality Manufacturer’s chain and give “Zales’ chains” an
“F,” you’d better believe I’m going to be on the phone to my lawyer in
the morning.
A general consumer guidebook is probably more realistically feasible.
I’d love to write such a thing; I just have to find someone
interested in paying me to do it. There are similar things
already out there: I have several guides to buying colored stones,
estate jewelry, etc. So it’s still a matter of leading the horse to
water.
Well, anyway, that’s my two cents worth. Aren’t you glad you asked?
Suzanne
Suzanne Wade
writer/editor
Suzanne@rswade.net
Phone: (508) 339-7366
Fax: (928) 563-8255