apca-introduction's Issues

FORMAL OBJECTION: Issues pointing out flaws in examples here are closed without corrective action

This is a formal objection to the method and manner in which the examples and analysis have been conducted and presented in this repo.

Issues posted here, pointing out the fundamental flaws in the examples and comparisons demonstrated in this repo, have been closed without corrective action.

  • The "analysis" shown on this repo makes unauthorized changes to APCA code and math/methods, and further conflates non-relevant issues and unsupported changes to math methods of both APCA and WCAG2.

  • The visual examples used in the "introduction" readme here do not follow APCA guidelines, and use inappropriate font weights that place the contrast into the contrast-constancy zone, which minimizes the true differences and therefore misleads the reader.

    • Corrected versions of the visual examples are demonstrated in the fork's readme.
  • The APCA implementations, comparisons, and examples used in this repo do not comply with the requirements of the APCA minimum compliance document.

    • APCA Integration Compliance
    • The tool linked to by this repo does not follow the minimum compliance requirements, and is therefore not compatible and is in breach of the license.
  • Some of the links to alleged APCA materials in both the introduction and the analysis pages here refer to legacy or early developmental notes, and many are not the canonical documentation, which will only serve to confuse or mislead. Key materials and tools are:

    • Why APCA: a plain-language overview of APCA.
    • Accurate Contrast with APCA: a complete catalog of official APCA documentation, resources, and discussion, plus links to third-party and peer reviews, and more.
    • APCA Demo Tool: the canonical APCA tool. The tool linked in this repo is not a compliant version of APCA.

In Conclusion

I have made good-faith attempts directly with xi (Tobias Bengfort) to correct the problems in this repo. Nevertheless, those attempts have failed; therefore this is intended as a final statement of non-conformance.

Best Regards,

Andrew Somers
APCA Research Lead
Myndex

Flare correction of 0.4 (40%)

The claim is made that a WCAG 2.1 luminance contrast formula with a flare correction of 40%, rather than 5%, gives high correlation to the results from APCA.

5% is already far too high.

40% means that black (#000) looks the same as color(srgb-linear 0.4 0.4 0.4), which is rgb(66.52% 66.52% 66.52%), or in other words a fairly light grey.

That can happen: for example, a dim projector in a bright room on a white screen, with the audience saying "we can't read the slides". But it doesn't seem like a reasonable basis for a lightness contrast algorithm.
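
To make the 0.4 figure concrete, here is a small sketch (function names are mine, not from this repo) showing the sRGB encoding of linear 0.4 and the effect of the flare term on the contrast ratio:

    // WCAG 2.x style contrast ratio with a configurable flare term
    function contrastRatio(lum1, lum2, flare = 0.05) {
      const hi = Math.max(lum1, lum2);
      const lo = Math.min(lum1, lum2);
      return (hi + flare) / (lo + flare);
    }

    // Inverse sRGB transfer function: linear light -> gamma-encoded value
    function srgbEncode(linear) {
      return linear <= 0.0031308
        ? 12.92 * linear
        : 1.055 * Math.pow(linear, 1 / 2.4) - 0.055;
    }

    console.log(srgbEncode(0.4));           // ≈ 0.6652, i.e. rgb(66.52% 66.52% 66.52%)
    console.log(contrastRatio(1, 0));       // 21   (white on black, 5% flare)
    console.log(contrastRatio(1, 0, 0.4));  // 3.5  (white on black, 40% flare)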

The observed correlation is interesting, certainly, but I don't see flare correction as a reasonable model to explain the correlation.

I also don't really see the WCAG luminance ratio (tweaked or not) as a useful model going forward, because luminance is not at all perceptually uniform. Simply put, a mid grey is nowhere near the middle of a black-to-white luminance ramp, while it is at the middle in CIE Lightness or OKLab Lightness or UCS16 J, all of which try to model perceptually uniform lightness.
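
For example, a quick check with CIE lightness (a sketch; the function name is mine):

    // CIE 1976 lightness L* from relative luminance Y in [0, 1]
    function cieLstar(Y) {
      const d = 6 / 29;
      const f = Y > d ** 3 ? Math.cbrt(Y) : Y / (3 * d * d) + 4 / 29;
      return 116 * f - 16;
    }

    console.log(cieLstar(0.5));   // ≈ 76: mid luminance is already a light grey
    console.log(cieLstar(0.184)); // ≈ 50: the perceptual mid grey is only ~18% luminance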

xi claims that he can see no difference regarding examples demonstrating the deficiency of WCAG2 vs the demonstrated efficacy of APCA

Extension of #2, which you closed before I could comment.

The images you posted are far from obvious evidence, at least for me. I am not sure why you treat them as such.

So then you are claiming that, in the following two samples, the one on the LEFT is just as readable as the one on the RIGHT:

WCAG 3:1 examples

And you are therefore further claiming that one of the following two samples is substantially less readable:

APCA 45 examples

Before I go further, I want to clarify that you are claiming you see no difference between the top two samples, and the second set of samples, because that is what you stated in issue #2, which you closed before I could comment.

I look forward to your response.

Is the term "ambient light" correct in this context?

Thanks for this APCA Introduction!

In your detailed analysis, you mention:

In this case, 0.05 is added to both values to account for ambient light.

FWIW, I had always presumed that tweak was just to avoid division by zero.

With your very helpful comparison plots, you mention:

I added a modified WCAG 2.x curve with an ambient light value of 0.4 instead of 0.05.

Might you add some exposition regarding how you picked 0.4 and how that is a factor for ambient light?

My understanding is that the main advantage of the APCA algorithm is using values tuned to LCD/LED displays instead of CRTs. It also seems to me that the values APCA returns are spread geometrically across a more useful range. That is, while the 2.x metric varies from 1 to 21, 7:1 is the highest referenced value, and at about 12:1 it is hard to discern hue at all (for text).
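
For reference, the 1-to-21 range follows directly from those 0.05 terms: the extreme case of pure white against pure black gives

$$ \frac{1.0 + 0.05}{0.0 + 0.05} = 21 $$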

As interested as I am in a better contrast formula for WCAG 3, being able to account for ambient light would be huge. There is interest in having facility/signage contrast requirements that are more sophisticated than what we currently have. Light meters used in photography provide an easy way to measure luminance, but I have never read a compelling explanation of how that is sufficient for print foreground/background.

Scaling should not be ignored part deux

Opening a new issue, as you closed the other without resolution.

I have been trying to explain to you why you are wrong in a way that you can understand. Since you are a math major, try this:

How do you determine the hypotenuse (c)? Euclid instructs us:

a² + b² = c²

But you are then claiming that a + b = c, and this is not true.

The square root of (a² + b²) IS NOT equal to (a + b), yet this is what you are claiming with your monotonic argument.
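
For a concrete example, take a = 3 and b = 4:

$$ \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \ne 3 + 4 = 7 $$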

But in the "analysis" you are doing, you are essentially making that assertion.

You are also comparing "non-perceptual lightness" of WCAG 2's luminance to the perceptual lightness curves of APCA. This is a fully invalid comparison. It is an "apples and apricots" comparison at best.

Your assertion that 0.4 is an ambient flare component is fully unsupported by any science. A trivial measurement of the actual flare in anything resembling a standardized environment demonstrates that this is far from the case, and has absolutely no basis in fact.

An Alternate Example

Going back over the 2019 trials, I revisited an earlier model, and I just released it at DeltaPhiStar as a general-purpose perceptual contrast algorithm.

    // bgLstar and txLstar are the CIE L* (0–100) of the background and text colors
    deltaPhiStar = (Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618) * 1.414 - 40;

    // ** is the exponentiation operator, equivalent to Math.pow
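
Wrapped into a self-contained function (the function and parameter names here are illustrative, not part of the release):

    // DeltaPhiStar contrast from the CIE L* of background and text
    function deltaPhiStar(bgLstar, txLstar) {
      const dPhi = Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618;
      return dPhi * 1.414 - 40; // final scaling and offset
    }

    console.log(deltaPhiStar(100, 0)); // ≈ 101 for maximum contrast (white bg, black text)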

You could remove the final scaling so that:

$$ \left| bgLstar^{1.618} - txLstar^{1.618} \right|^{0.618} $$

But this is irreducible. You cannot reduce it to:

$$ ( bgLstar - txLstar ) $$

and hope for a result that is in any way similar.

In short, you are stating that you are only "examining the mathematical properties" while ignoring the "visual science aspects", which is a spurious and incongruent argument when the math is specifically modeling those visual quantities.

Scaling should not be ignored

@Myndex raised some points in other issues.

From #3 (comment):

you are not calculating APCA the way APCA is intended to be used. You wave off the scaling which you consider unimportant, but which is key for perceptual uniformity. You can not just disregard aspects of APCA design because you don't like them.

From w3c/silver#651 (comment):

First, he makes claims that WCAG 2 and APCA are "not that different" but then instead of showing that in a way that would be clear (he can't as it is not true) he makes a gross modification to crudely and incompletely reverse engineer the APCA contrast curves

Existing examples are not useful to evaluate contrast formulas

Split from #2 by @Myndex:

Your examples as chosen are all fairly high contrast using a bold font. This does NOTHING to demonstrate the differences. Your examples are mostly in the realm of "contrast constancy", and not near the edge.

Your examples are poor: using a bold font and high contrast, they sit above contrast constancy and therefore do not demonstrate the important differences.

"All models are wrong....but some models are useful"

I see the recent addition of a truncated quote from George Box to support your position.

Here is a more complete quote from Box:

"It has been said that 'all models are wrong but some models are useful.' In other words, any model is at best a useful fiction—there never was, or ever will be, an exactly normal distribution or an exact linear relationship. Nevertheless, enormous progress has been made by entertaining such fictions and using them as approximations."

MY OBJECTION

You cannot just hand-wave the "all models are wrong" aphorism around out of context like this unless you are trying to sway laypersons unaware of the real contextual value of the statement. For the record, George Box was a brilliant statistician, though not a vision scientist. Nevertheless, statistics is the foundation of much of science, and particularly of the science of modeling phenomena. We know all too well that no vision model is "absolute", but we do have models that are more than accurate enough to do very useful work.

A model does not need to encompass 100% of the parameters to be useful. But the parameters it DOES use must be connected to actual characteristics / measured data. The ultimate goal is an irreducible simplicity, but never a naive approach.

What you label as the naive approach is simply incorrect in total. The 0.4 is the naive approach.

Your 0.4 is a "guess" as you youself called it, and thus is unsuppartable and notwithstanding. Nothing about the 0.4 addition is relatable to any true or measured phenomena. Again, it is nothing more than a brute force attempt to reverse engineer curves that are derived from measured phenomena.

There are ample data sets of measured vision characteristics that can be used to validate models. That might be a good place for you to start.

PARTING SHOT:

To translate the phrase "All models are wrong" from statistical insider knowledge into plain language for the layperson, the phrase should read:

"No model is perfect, but well developed models give useful results".
—Andrew Somers

Visual Examples are Incorrect and Misleading

The visual examples shown in the "introduction" are incorrect and particularly misleading, as they use incorrect font sizes and weights, and therefore are not actually comparing APCA to WCAG.

This problem was corrected in pull request #11, but that was closed without being merged.

The proper visual examples can be seen at the corrected fork of this repo

Spatial frequency is underrepresented

@Myndex wrote in #7 (comment):

SPATIAL FREQUENCY is a primary driver of contrast. Your "mathematical analysis" strips all of the spatial frequency sensitivity out of the formula. Therefore your analysis is not valid.

Contrast is a perception and it is NOT only about the distance between two colors, yet all of your analysis seems to be continuing this wrong assumption, and this wrong assumption is a key deficit of the WCAG 2 contrast method.

Errata

I am quoting the "Missing Introduction" below, indicating misunderstandings and errors:

APCA was created by Andrew Somers (Myndex) and is currently being proposed for the next major version of the W3C Accessibility Guidelines

As well as for use in other standards and contexts.

An interactive demo is available at https://xi.github.io/apca-introduction/tool/.

Your demo as linked here is not fully compliant with APCA. Please see: https://git.apcacontrast.com/documentation/minimum_compliance

  • WCAG 2.x produces a ratio between 1:1 and 21:1. APCA produces a value roughly between -100 and 100.

More correct: APCA produces a value of 0 to 106 for dark text on a lighter background, and 0 to -108 for light text on a darker background.
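
Those endpoints fall directly out of the power-curve exponents and the Lc offset. Here is a simplified sketch of that core stage (luminance in, Lc out); the canonical code adds input clamping, low-contrast clipping, and other details, so use the official library, not this, for any real use:

    // Simplified APCA core: soft black clamp, then power-curve difference.
    // Ytxt and Ybg are screen luminances in [0, 1].
    function apcaLcSketch(Ytxt, Ybg) {
      const softClamp = Y => (Y < 0.022 ? Y + (0.022 - Y) ** 1.414 : Y);
      const yt = softClamp(Ytxt);
      const yb = softClamp(Ybg);
      const sapc = yb >= yt
        ? (yb ** 0.56 - yt ** 0.57) * 1.14   // dark text on a lighter background
        : (yb ** 0.65 - yt ** 0.62) * 1.14;  // light text on a darker background
      return (sapc > 0 ? sapc - 0.027 : sapc + 0.027) * 100;
    }

    console.log(apcaLcSketch(0, 1)); // ≈ 106  (black text on white)
    console.log(apcaLcSketch(1, 0)); // ≈ -108 (white text on black)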

  • The result of APCA is negative for light text on dark background. You will usually work with the absolute value though.

Developers have the option of returning a signed value to indicate polarity, OR returning a value with a string identifying the polarity. I.e.:

Signed value: 75 or -75
Text Ident: 75 BoW or 75 WoB
Text Ident: 75 LM or 75 DM
etc.
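
For example (a sketch only; the helper name is illustrative):

    // Report an Lc value either as a signed number or as an absolute value
    // with a polarity label ("BoW" = black on white, "WoB" = white on black).
    function formatLc(lc, { signed = true } = {}) {
      if (signed) return String(lc);            // e.g. "75" or "-75"
      const polarity = lc >= 0 ? "BoW" : "WoB";
      return `${Math.abs(lc)} ${polarity}`;     // e.g. "75 BoW" or "75 WoB"
    }

    console.log(formatLc(-75));                    // "-75"
    console.log(formatLc(-75, { signed: false })); // "75 WoB"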

  • With WCAG 2.x, 29% of all color combinations meet at least level A, 14% meet at least level AA, and 5% meet level AAA. With APCA, only 23% of all color combinations meet at least level A, 11% meet at least level AA, and 3% meet level AAA. So APCA is stricter.

There is no contrast requirement for level A, so I'm not sure what you are referring to. Also, how did you derive these numbers? They don't seem correct?

Examples

Your examples as chosen are all fairly high contrast using a bold font. This does NOTHING to demonstrate the differences. Your examples are mostly in the realm of "contrast constancy", and not near the edge.

Evaluating a contrast algorithm is extremely difficult because contrast perception varies from person to person and also depends on the lighting conditions.

No, not really, though there are contrast sensitivity impairments that do have an effect.

Ambient lighting affects contrast perception in that ambient light is part of what drives light adaptation, and the adapted level affects contrast perception, as does context. All of these issues, though, are accounted for in the contrast-matching experiments that informed the curve shaping done in developing APCA, and are set to the "lowest common worst case".

Whether APCA is actually better than WCAG 2.x is therefore hard to tell.

Actually, it is prima facie evident, and trivial to demonstrate. Here's an example:

ColumnCompareAll400

And here is a comparison for dark mode:

ColumnCompareAll400

I personally could not say from the examples above which one works better for me.

Your examples are poor: using a bold font and high contrast, they sit above contrast constancy and therefore do not demonstrate the important differences.

A rigorous scientific evaluation is not yet available.

??? There has been ample third-party and peer review. Here are just a couple:

It is true there are uninformed parties claiming otherwise, and they fully ignore the reviews that have been completed. Also, APCA is the result of three years of development in the Visual Contrast subgroup of Silver, under the oversight of the AGWG.

For one, it was born out of my personal frustration with the original documentation. Some important pieces of information (e.g. the actual algorithm) get buried under all that text.

LOL. You never bothered to look in the folder labeled "documentation"; the first file in the list is the algorithm. Not to mention that the JS file shows the algorithm plain as day, along with ample comments.

The original documentation also contains absolute statements like "APCA is perceptually uniform" and that the old algorithm produces "invalid results". This in my opinion is wrong as perceptual uniformity is an ideal that can never be reached completely. So I felt like there was room for a more balanced introduction.

You are by your own admission not a vision scientist. "Perceptually uniform" has a specific meaning in the context of the field, namely that the delta value matches the perceived delta.

WCAG 2 contrast math is nowhere close to perceptual uniformity, as is plain to see in the above examples.

Perceptual uniformity means, in this context, that the lightest pair of colors at Lc 45 is just as readable as the darkest pair of colors at the same Lc 45.

If you want to dig deeper, I recommend to start with the original WCAG issue and the documentation README.

Thread 695 is over three years old, and is not a good place to start. The documentation readme you linked to is fine, but the catalog of resources is https://git.myndex.com

Thank you for reading.
