JHCCoffee You won’t be able to test accuracy (so you’re reliant on the manufacturer’s figures), only precision/repeatability.

The R2 is stated as accurate to +/-0.1%TDS, +/-0.05%TDS typical. The VST Lab II is accurate to +/-0.05%TDS (precision +/-0.02%); the Lab III is accurate to +/-0.03%TDS and precise to +/-0.01%TDS. All figures are for coffee, rather than espresso.

At filter strength, 0.01%TDS equates to about 0.14%EY, and 0.03%TDS to around half a %EY. Precision much outside +/-0.04%TDS starts to make things tricky over a sample of filter brews.
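As a rough sketch of where those figures come from (EY scales with the brew ratio; the ~1:14 dose:beverage filter brew below is an assumption for illustration):

```python
# EY% is roughly TDS% multiplied by (beverage mass / dose).
# A ~1:14 dose:beverage filter brew is assumed for illustration.
dose_g = 15.0
beverage_g = dose_g * 14        # ~1:14 dose:beverage
ratio = beverage_g / dose_g     # 14.0

for tds_delta in (0.01, 0.03, 0.04):            # %TDS
    print(f"+/-{tds_delta:.2f}%TDS -> +/-{tds_delta * ratio:.2f}%EY")
# +/-0.01%TDS -> +/-0.14%EY
# +/-0.03%TDS -> +/-0.42%EY (around half a %EY)
# +/-0.04%TDS -> +/-0.56%EY
```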

LMSC

I can probably do that comparison in the future; it might be a while until we have free time at the roastery to do it properly. The R2 units had slight variance but they were consistent, so if one unit tested 0.05 TDS higher, it was reliably always 0.05 TDS higher.

Coffee Roaster. Home: Sage Dual Boiler, Niche Zero, Ode v2 (SSP), 1zpresso ZP6 Work: Eagle One Prima EXP, mahlkonig e80s, Mazzer Philos and lots more

If my math is correct: if I have a 2:1 espresso brew ratio and a 0.10 (10%) TDS, the EY is 0.20, which is 20%. If I have a 2:1 brew ratio and a 0.101 (10.1%) TDS, the EY is 0.202, which is 20.2%. So the EY impact of 0.1% accuracy at a typical espresso brew ratio is only 0.2%, which is meaningless?

The only thing is that I have to dilute the espresso sample by, say, 2:1 or I get an error code. But I can simply adjust the EY formula math for that by multiplying the resultant EY by 2.0.
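A minimal sketch of that arithmetic, with made-up numbers (it assumes the sample is diluted by weight, so the measured TDS halves and the correction factor is exactly 2.0):

```python
# Hypothetical 2:1 (beverage:dose) espresso, sample diluted to stay in range.
dose_g = 18.0
beverage_g = 36.0                  # 2:1 brew ratio

# Undiluted: EY% = TDS% * beverage mass / dose.
tds_pct = 10.0
print(tds_pct * beverage_g / dose_g)        # 20.0 %EY

# Sample diluted 1:1 with water (2x total sample mass): the reading halves,
# so multiply the measured TDS (or the resulting EY) by 2.0.
diluted_tds_pct = 5.0
dilution_factor = 2.0
print(diluted_tds_pct * dilution_factor * beverage_g / dose_g)   # 20.0 %EY
```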

So my Reichert should be good enough. Am I missing something here?

    JHCCoffee Your math is fine; you can multiply the TDS by the brew ratio (for espresso, as it’s based on beverage yield, not brew water poured). What you may be missing is that you seem to be confusing ‘resolution’ (the number output by the refractometer display) with ‘accuracy’ (the reading relative to a known datum, usually dehydration carried out by the manufacturer, and fairly moot as far as we are concerned) and ‘precision’ (the ability of the device to replicate that number when taking multiple readings of the same sample).

    This is the same for many devices, scales for instance: some are accurate to 0.1g, while others read to 0.1g resolution but are only accurate to 0.4g.

    So, if you have an espresso that is 4x the dose weight and reads 5.05%TDS, your EY is 20.2%; it could really be +/-0.2%EY if the device is +/-0.05%TDS. No big deal, as you say. Or a +/-0.4%EY span if the reading is out by 0.10%TDS. Still useful.

    The higher concentration of 1:4 shots is working in your favour here, but as the concentration of the brews gets weaker for filter coffee, the precision gets worse. 1.40%TDS +/-0.05% at 1:14 (dose:beverage mass) is 18.9% to 20.3% for readings of the same sample. Repeated brews of the same coffee & method may span 2%EY or a tad over. You’re going to brew multiple coffees from various origins, which will double that span (rule of thumb), so still around 4-5%EY, and the target range is typically 4%EY. Still doable, enough to be of some use. It doesn’t necessarily matter if your average is 19%EY or 21%EY, rather than 20%; other factors could have a bearing here. Precision is more useful for checking brew consistency.

    As you go farther out with the readings, to say +/-0.10%TDS, you now have a span of 3%EY for the same 1:14 filter sample; by the time you are considering different origins, your range of readings is as wide as the range of extractions that are typically possible & much wider than the target region…not much use at all. If you are primarily concerned with espresso (especially if you can read undiluted samples at around 1:2), this may not be of concern to you; filter generally requires better accuracy & precision.
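    In code, the same sums, using the figures above (the helper name is just for illustration):

```python
# EY span for a reading of tds_pct +/- tds_err at a given beverage/dose ratio.
def ey_span(tds_pct, tds_err, ratio):
    low = round((tds_pct - tds_err) * ratio, 1)
    high = round((tds_pct + tds_err) * ratio, 1)
    return (low, high)

print(ey_span(5.05, 0.05, 4))     # 1:4 espresso -> (20.0, 20.4)
print(ey_span(1.40, 0.05, 14))    # 1:14 filter  -> (18.9, 20.3)
print(ey_span(1.40, 0.10, 14))    # 1:14 filter  -> (18.2, 21.0), ~3%EY span
```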

    Thank you for taking the time to provide these insights @MWJB. Most helpful, for me and hopefully other current and future readers.

    JHCCoffee

    Hiya

    Why do you have to knock the test back?

    And if you are going to do this - use the same water that you extract with?

    All this is only for espresso too?

    Looking at this and wondering if it helps me set equipment up or is a ‘nice to have’

    @JHCCoffee has to dilute short ratio espresso samples because his model of refractometer is the ‘coffee’ version, not the ‘espresso’ version and has a limit to the concentration of the sample it will read.

    Typical brew water will be under 200mg/L, or 0.02%TDS on the refractometer (10.00%TDS espresso is 100,000mg/L, 1.40%TDS coffee is 14,000mg/L, to put it in perspective). So, sure, use the same water as you brew with, but it won’t skew readings for espresso if you don’t. I think most people would buy a refractometer that covers both espresso & filter (but to be honest, I also bought the coffee version of the VST Lab II for the higher precision for brewed), or just espresso if that is their main focus, rather than dilute shots for sampling. An ‘espresso’ refractometer will read samples from regular shots without dilution.
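    For perspective, the mg/L figures and %TDS are related by a simple conversion; a minimal sketch, treating mg/L as roughly mg/kg (i.e. assuming the sample is about 1 g/mL):

```python
# %TDS is parts per hundred by mass; mg/L / 10,000 ~= %TDS for ~1 g/mL samples.
def mg_per_litre_to_tds_pct(mg_per_litre):
    return mg_per_litre / 10_000

print(mg_per_litre_to_tds_pct(200))      # 0.02  (typical brew water)
print(mg_per_litre_to_tds_pct(14_000))   # 1.4   (1.40%TDS filter coffee)
print(mg_per_litre_to_tds_pct(100_000))  # 10.0  (10.00%TDS espresso)
```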

    Thank you. Thinking about the DiFluid R2 discussed above for espresso but wondering how useful it would be to me.

    Generally I get to try 2 or 3 different beans most days, but now I’m seeing fewer different ones as time goes on as it’s ‘same old’ sometimes. Would be good to get these spot on.

    • MWJB replied to this.

      NewBoyUK Think of a refractometer as a device for establishing & monitoring consistency, against your subjective liking score. It’s most useful for looking at big data, over lots of samples at a given method/ratio, rather than sniping beans/cups.

      Of course, if your regular cups are trending towards, say, low extraction across beans, then this will help you identify that. Getting cups ‘spot on’ also means dealing with non-extraction related issues, like excess solids/silt that affect the cup, even at a reasonable extraction.

      What it does do is help you understand the impact of grind adjustments and prevent you from second-guessing what is under/over-extracted if you haven’t previously had an objective datum (e.g. not all types of bitterness stem from over-extraction).

      As an engineer, being given all sorts of beans to set up with limited time to do it, I don’t think it’s going to help.

      However, maybe if I was to take some away and do a more in-depth trial, I could hit the spot?

      Then, when given that particular setup again, I have a good start?

      I can see the usefulness for roasters and users who have the same beans for a while, but it doesn’t seem like a one-hit wonder.

      Could you, from the 1st test results, get it ‘right’ on the 2nd, or at least very close?

      • MWJB replied to this.

        NewBoyUK When you say ‘set up’, do you mean adjust grind for a known brew method/ratio/system? (E.g. 1:2, or other known/typical ratio espresso shots, or filter brews?)

        You would be able to nominally dial in to a target %EY (maybe 19% +/-2% for espresso, or 20% +/-2% for filter), within a handful of grind adjustments. This would not, in itself, guarantee a great tasting result, but it would show there is no obvious objective, mechanical impediment in the owner/user then fine tuning to their preference point.

        The target range is typically 18-22%EY, but different roasts and origins extract more/less than each other: Brazils, Costa Ricans & Guatemalans may taste best at the lower end; Kenyas, Rwandas, Colombias at the higher. You can still get under-extracted coffee at 19% and over-extracted at 21%, in terms of sensory perception, depending on the bean. A 20% extraction, for example, would be an average target over a sample, not the best target in a ‘one size fits all’ scenario.

        I don’t really see the use for roasters specifically; they tend to brew/QC by cupping & rarely measure the dose (if SCA protocol), nor the water weight, frequently under-extracting coffee. You can under-develop roasts (in the sensory aspect) and still achieve an expected EY. To roast coffee so badly that it doesn’t extract normally in a filter brew would show a catastrophic failure & be pretty tragic (nevertheless, it occasionally happens).

        In short, if the machine & grinder are known good models, without any obvious/detectable malfunctions and the coffee is ball-park roasted for the brew method (e.g. very light/filter roasts may need to be brewed at longer espresso ratios to satisfactorily extract), you’re probably not going to see any evidence of a problem that isn’t already presenting a clue elsewhere, like way too coarse/fine a grind, someone trying bizarrely short espresso ratios etc.

        If you always took a bean with you that was consistently available and you were used to it, you could use this to dial in and say, ‘I tested with my usual beans, dialled in & all is working well.’ If the customer says that they don’t like their beans with the set up, you could tell in a brew whether there was an obvious extraction-related reason why; if not, then tell them to buy different beans :-) Beans are an ingredient; we buy different ones to explore different tastes, and sometimes we just don’t like them when brewed normally. You can’t fix every bag by tweaking extraction. You are dealing with objective mechanical devices, and you can use a refractometer to check their objective function. You can’t make everything taste great, in the same way you can’t fix someone’s CD player to make Tom Waits sound like Elvis.

        Basically I get any old beans thrown at me, from £8/kg to £30/500g.

        I deal with 2 different machines and generally 4 different grinders. It’s what bean gets chucked in that’s the hard part.

        I have to do roughly 1:3 to 1:4 for unknown (cheap) beans. If they are actually labeled and have contact details on the bags (more common blends) I usually call them, and generally they are 1:2 to 1:3.

        Some are awful and spit out lol

        Just thinking/hoping there’s an easier way to get the best out of what’s given.

        • MWJB replied to this.

          NewBoyUK It’s what bean gets chucked in that’s the hard part.

          Subjectively, in terms of expectation (which may not be realistic), I can see this being hard. Objectively, not so much, if the output is achieved in normal time & EY.

          NewBoyUK I have to do roughly 1:3 to 1:4 for unknown (cheap) beans. If they are actually labeled and have contact details on the bags (more common blends) I usually call them, and generally they are 1:2 to 1:3.

          I don’t see why cheap beans need a longer ratio, unless it is just to mute offensive flavours/intensity. Cheap, overly roasted beans tend to extract easily enough. Yes, many roasters suggest 1:2 to 1:3 ratios, but the bean can’t do it all by itself; some of it depends on whether the equipment will allow those beans to extract at those ratios. This you will be able to establish with repeated use of the 4 grinders.

          You’re an engineer for the machines & grinders, not a fairy godmother for the roasting community. :-)

          MWJB The DiFluid is absurdly cheap for the spec it quotes, but I’ve not tried it and have too many refractometers that I don’t/can’t use already.

          I wouldn’t say “I lied”, but blame unrestrained curiosity, late-night wine drinking & ‘one click’ online purchasing…a DiFluid R2 arrived today. I’ll post some thoughts over the coming week…immediately, I don’t like the spoon idea - 3ml pipettes recommended.

          I await your 👍👎

          I believe you can share results with people, so I may ask for a few…… if you don’t mind.

            NewBoyUK I’ll post my findings here; they will be for filter brews. If satisfactory for that, espresso won’t be an issue, assuming all samples are syringe filtered.

            • LMSC replied to this.

              MWJB I’ll post my findings here; they will be for filter brews.

              Awesome mate! We are looking forward to it. :-)

              • MWJB replied to this.

                LMSC I’ve done 3 sets of 10 readings (V60 paper-filtered drip, no syringe filtering of samples) with the VST Lab II and the DiFluid R2. Both were zeroed at the same time & readings taken on one device after the other in an equivalent timeframe. I followed the VST protocol for the VST and the DiFluid protocol from their videos (the manual is not specific in this regard).

                Test 1:

                DiFluid average TDS after 10 reads 1.41, stdev. 0.007 (0.016 at 95% confidence).

                VST Lab II average TDS after 10 reads 1.40, stdev. 0.006 (0.013 at 95% confidence).

                Test 2:

                DiFluid average TDS after 10 reads 1.42, stdev. 0.007 (0.016 at 95% confidence).

                VST Lab II average TDS after 10 reads 1.38, stdev. 0.004 (0.010 at 95% confidence).

                So far, so good; the difference in span between readings didn’t exceed 0.06%TDS, and as both devices claim +/-0.03% accuracy, this seems fine…

                Test 3 and the wheels seem to come off a little…

                DiFluid average TDS after 10 reads 1.48, stdev. 0.008 (0.018 at 95% confidence).

                VST Lab II average TDS after 10 reads 1.40, stdev. 0.007 (0.016 at 95% confidence).

                So the variation in readings between devices is not a constant interval. As both are quoted as accurate to +/-0.03%TDS, something seems adrift here, as the readings differed between devices by up to 0.09%TDS (neither I, nor any influencers that I am aware of, have the facility to check refractometer accuracy). However, the precision of the readings for both seems acceptable.

                I then wondered whether the difference in protocol was causing the differences in the averages, so I repeated Test 3, using the VST protocol for both devices. So rather than plonking the hot sample onto the DiFluid lens and taking readings, I cooled the sample in an espresso cup then placed it on the lens.

                Test #3 repeated with VST protocol for both devices:

                DiFluid average TDS after 10 reads 1.44, stdev. 0.031 (0.069 at 95% confidence).

                VST Lab II average TDS after 10 reads 1.42, stdev. 0.005 (0.012 at 95% confidence).

                The DiFluid readings started at 1.50%TDS, dropping to 1.41; the VST only drifted between 1.41 and 1.42. But the DiFluid settled to 1.41, 1.41, 1.41 for readings 8, 9 & 10 (vs 1.41, 1.41 & 1.41 for the Lab II).

                So I feel I have to start again; it seems logical that the plastic casing of the DiFluid may not be allowing samples to reach a steady state as quickly, compared to the steel Peltier dish on the VST.

                The DiFluid spoon (0.65g capacity) is a bit daft/messy. Printed instructions are scant, and the YT videos don’t offer any further info on taking readings.

                I’m a bit disappointed that I’m having to explore a previously unmentioned test protocol, seeing that this has been in the field for some months, but I’ll see what occurs over the next few days.

                Data is here:

                VST Lab II vs DiFluid R2
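                If anyone wants to reproduce the summary figures from their own readings, here’s a minimal sketch; the readings list below is made up, and the exact 95% multiplier isn’t stated above, so the t value for 10 readings (9 degrees of freedom) is assumed, which roughly matches the quoted numbers:

```python
import statistics

# Hypothetical set of 10 TDS readings of one sample.
readings = [1.41, 1.40, 1.41, 1.42, 1.41, 1.40, 1.41, 1.41, 1.42, 1.41]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)   # sample standard deviation (n-1)
t_95 = 2.262                         # two-tailed t value, 95%, 9 degrees of freedom (assumed)
spread_95 = t_95 * stdev             # spread of individual readings at ~95% confidence

print(f"average {mean:.2f}, stdev {stdev:.3f} ({spread_95:.3f} at 95% confidence)")
```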

                So for the DiFluid R2, I use the provided spoon to take the sample for filter and then put it on a tablespoon to let it cool down. The R2 definitely struggles to cool down the sample compared to the VST; the temperature difference between zero and sample has a big effect on TDS (data from Jonathan Gagné, with room temperature being 75°F in his graph).

                Following the Pocket Science workflow gives the most consistent results, and it is still usable for the VST to make sure the temperature is stable (https://pocketsciencecoffee.com/2022/12/07/my-current-refractometry-workflow/).

                Coffee Roaster. Home: Sage Dual Boiler, Niche Zero, Ode v2 (SSP), 1zpresso ZP6 Work: Eagle One Prima EXP, mahlkonig e80s, Mazzer Philos and lots more

                • MWJB replied to this.

                  InfamousTuba If this is DiFluid’s test protocol, they should say so somewhere?

                  You’ll see from my tests there was no issue with the VST precision using their protocol. I’m not sure why Gagné would be measuring samples at such high temperatures?