Nice story. An even more powerful way to express numbers is as a continued fraction (https://en.wikipedia.org/wiki/Continued_fraction). You can express both real and rational numbers efficiently using a continued fraction representation.
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
> You can express both real and rational numbers efficiently using a continued fraction representation.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're more powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
> No, all finite continued fractions express a rational number
Any real number x has an infinite continued fraction representation. By efficient I mean that the continued fraction coefficients give you an efficient way to compute rational upper and lower bounds that approximate x well (the convergents are the best rational approximations to x).
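To make that concrete, here is a minimal Python sketch (my own illustration, not code from the reals library) that turns continued fraction coefficients into their convergents, which are exactly those best rational approximations:

from fractions import Fraction

def convergents(coeffs):
    # standard recurrence: h_n = a_n*h_(n-1) + h_(n-2), k_n = a_n*k_(n-1) + k_(n-2)
    h_prev, h = 1, coeffs[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in coeffs[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

# pi = [3; 7, 15, 1, 292, ...] gives 3, 22/7, 333/106, 355/113, 103993/33102
print(list(convergents([3, 7, 15, 1, 292])))

Successive convergents alternate between lower and upper bounds on x, so you get both bounds essentially for free.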
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity and not having to do error analysis is far better than any other purely numerical system.
> fundamentally equivalent to simple fractions.
This feels like it is a bit too reductionist. I can come up with a lot of examples, but it's quite hard to find the best rational approximations of a number with just fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any quadratic irrational has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can be easily converted to normal fractions and vice versa, is a strength of continued fractions, not a weakness.
Fractions do pose a non-trivial issue when they have to be converted to decimal representations. So that is indeed a weakness, although not a direct one. (You can argue the same for big decimals with a binary mantissa, for example.)
To my understanding, continued fractions can represent any number to as many decimal places as you need. So if you need π you can just calculate 2 decimal places and write 3.14
If you want to calculate π*10^9 you can calculate e.g. 12 digits and write 3141592653.58
I think this is what OP means and I am not sure why you do not agree.
But here continued fractions are used to progressively generate approximations to the true real number. So you have no control over denominator and as you mentioned repeated division is necessary for most numbers. In comparison, digit generation approach can be tailored to the output radix (typically 10). Division still does likely happen, but only in the approximation routine itself and thus can be made more efficient.
I agree though, the article is about a calculator app and the user typically won't care if it takes 10ns or 100ms to generate an output - it would look like an instant response anyway.
That's the issue, no? If you go infinite you can then express any real number. You can then actually represent all those whose sequence is equivalent to a computable function.
You are describing something that is practically more like a computer algebra system than a number system. To go infinite without infinite storage, you need to store the information required to compute the trailing digits of the number. That is possible with things like pi, which have recursive formulas to compute, but it's not easy for arbitrary numbers.
> That is possible with things like pi, which have recursive formulas to compute, but it's not easy for arbitrary numbers.
It is possible for pretty much all the numbers you could care about. I'm not claiming it is possible for all real numbers though (notice my wording with "express" and "represent"). In fact since this creates an equivalence between real numbers and functions on natural numbers, and not all functions are computable, it follows that some real numbers are not representable because they correspond to non-computable functions. Those that are representable are instead called computable numbers.
How would you get those numbers into the computer anyway? It seems like this would be a practical system to deal with numbers that can be represented exactly in that way, and numbers you can get at from there.
The way every other weird number gets into a computer: through math operations. For example, sqrt(7) is irrational. If you subtract something very close to sqrt(7) from it, then you need to keep making digits.
Continued fractions are very cool. I saw in a CTF competition once a question about breaking an RSA variant that relied on the fact that a certain ratio was a term in sequence of continued fraction convergents.
Naturally the person pursing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
Why unnecessarily air this grievance in a public forum. If this person reads it they will be unhappy and I'm sure they have already suffered enough from this failure.
Oh I don’t think of it like that - it was not a super serious competition and aside from some lighthearted ribbing there was certainly no suffering from any failure.
It's used with sarcasm / irony. In this use case, "naturally" implies the author intended to communicate one or more emotions from a certain narrow set of possibilities. That set includes:
- An eye-rolling, critical emotion - where they used up a valuable spot on the team to retain a person who ostensibly promises to specialize in exactly this type of problem, but instead they proved to be useless even in the one area they were supposed to deliver value.
- An emotion similar to that evoked by "c'est la vie". Sometimes this is resigned, sometimes this is playful, sometimes this is simply neutrally accepting reality.
Follow-up comments from the person who wrote it indicate they meant it in a playful sense of "c'est la vie", and indicated that the team found camaraderie and joy in teasing each other about it.
Sorry if this sounds a little bit like ChatGPT - I wrote it myself but at the point when one is explaining this kind of thing, it's difficult to not write like an alien or a robot.
It was an ironic twist of fate that we were preparing specifically for this type of challenge and, when presented with exactly what we had prepared for we failed to see the solution.
I think the other comment had an excellent breakdown of the various factors at play, so I will start by saying I fully endorse what was said there.
To highlight a key point: “naturally” is slightly humorous because it implies that while the outcome was ironic, it should almost be expected that an ironic bad thing happens. In addition, it signals my opinion on such situations more generally, whereas “ironically” is a more straightforward description of what happened that would add less humor and signal less of my personality.
I have been working on a new definition of real numbers which I think is a better foundation for real numbers and seems to be a theoretical version of what you are doing practically. I am currently calling them rational betweenness relations. Namely, it is the set of all rational intervals that contain the real number. Since this is circular, it is really about properties that a family of intervals must satisfy. Since real numbers are messy, this idealized form is supplemented with a fuzzy procedure for figuring out whether an interval contains the number or not. The work is hosted at (https://github.com/jostylr/Reals-as-Oracles) with the first paper in the readme being the most recent version of this idea.
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
It is equivalent to Dedekind cuts as one of my papers shows. You can think of Dedekind cuts as collecting all the lower bounds of the intervals and throwing away the upper bounds. But if you think about flushing out a Dedekind cut to be useful, it is about pairing with an upper bound. For example, if I say that 1 and 1.1 and 1.2 are in the Dedekind cut, then I know the real number is above 1.2. But it could be any number above 1.2. What I also need to know is, say, that 1.5 is not in the cut. Then the real number is between 1.2 and 1.5. But this is really just a slightly roundabout way of talking about an interval that contains the real number.
Similarly with decimals and Cauchy sequences, what is lurking around to make those useful is an interval. If I tell you the sequence consists of a trillion approximations to pi, to within 10^-20 precision, but I do not tell you anything about the tail of the sequence, then one has no information. The next term could easily be -10000. It is having that criterion about all the rest of the terms being within epsilon that matters and that, fundamentally, is an interval notion.
How do you work out an answer for x - y when eg x = sqrt(2) and y = sqrt(2) - epsilon for arbitrarily small epsilon? How do you differentiate that from x - x?
In a purely numerical setting, you can only distinguish these two cases when you evaluate the expression with enough accuracy. This may feel like a weakness, but if you think about this it is a much more "honest" way of handling inaccuracy than just rounding like you would do with floating point arithmetic.
A good way to think about the framework, is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
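To illustrate what that looks like concretely, here is a toy Python sketch of the bounds idea (my own, not how the reals library actually works): bracket sqrt(2) between exact rational bounds by bisection, then square the bracket.

from fractions import Fraction

def sqrt2_bounds(iterations):
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_bounds(50)
# the squared bracket straddles 2 but never collapses to exactly [2, 2]
print(float(2 - lo * lo), float(hi * hi - 2))   # two tiny, strictly positive gaps

No matter how many iterations you run, the bracket around sqrt(2) * sqrt(2) keeps a nonzero width; you can only shrink ϵ, never eliminate it.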
> you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ
The problem with that from a UX perspective is that you won't even get to write out the first digit of the solution because you can never decide whether it should be 1.999...999something (which truncates to 1.99) or 2.000...000something (which truncates to 2.00). This is a well-known peculiarity of "exact" real computation and is basically one especially relevant case of the 'Table-maker's dilemma' https://en.wikipedia.org/wiki/Rounding#Table-maker%27s_dilem...
If one embraces rational intervals throughout, they can be the computational foundation, and the UX could have the option of displaying the interval for the complete truth or, to gain an intuitive sense, picking a number in the interval to display, such as the median or mediant. Presumably this would be a user choice in any given context.
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets in - I wasn't intending to track one but it probably does provide some means of seeing how many people clicked the link
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package" which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and just skip to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements trying to say I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
However in the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. However sure enough the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Nice story. Thank you for sharing. For years, I struggled with the idea of "message passing" for GUIs. Later, I learned it was nothing more than the window procedure (WNDPROC) in the Win32 API. <sad face>
> However I wasn't sure how to improve further without help, so my mom found me an IT school.
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)
Probably institutes teaching IT stuff. They used to be popular (still?) in my country (India) in the past. That said, there are plenty of places which train in reasonable breadth in programming, embedded etc. now (think less intense bootcamps).
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. I remember I had one row in Excel per invited person with their data, placeholders in the Word document, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but it felt cool.
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with some template web page with some small code I could edit in Notepad and see come to life.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy in a sense that it is the worst, except for all the others. Immediately when I saw the example I thought: it is going to be either Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
IEEE 754 is what you get when you want numbers to have huge dynamic range, equal precision across the range, and fixed bit width. It balances speed and accuracy, and produces a result that is very close to the expected result 99.9999999% of the time. A competent numerical analyst can take something you want to do on paper and build a sequence of operations in floating point that compute that result almost exactly.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
William Kahan worked on both IEEE 754 and HP calculators. The speed gap between something like an 8087 and a calculator was not that big back then, either.
Yeah I mean they were surely too old to support it. But the designers of IEEE-754 must have been aware of these systems when they were making the standard.
Precision in numerics is usually considered in relative terms (eg significant figures). Every floating point number has an equal number of bits of precision. It is true, though, that half of the floats are between -1 and 1. That is because precision is equal across the range.
Only the normal floating point numbers have this property, the sub-normals do not.
In the single precision floats for example there is no 0.000000000000000000000000000000000000000000002
it goes straight from 0.000000000000000000000000000000000000000000001
to 0.000000000000000000000000000000000000000000003
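If you want to see that gap directly, here is a quick Python check (my own snippet) that decodes the two smallest positive float32 bit patterns:

import struct

def f32_from_bits(bits):
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(f32_from_bits(1))   # about 1.4e-45, the smallest positive subnormal (2**-149)
print(f32_from_bits(2))   # about 2.8e-45, the next one up; nothing representable in between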
Subnormals are a dirty hack to squeeze a bit more breathing space around zero for people who really need it. They aren't even really supported in hardware. Using them in normal contexts is usually an error.
IEEE 754 is what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions, even the seemingly weirder parts like -0, subnormals and all the rounding modes. It was not really democratically designed, but done by numerical computing experts coupled with hardware design experts. Every "simplified" implementation of floating point that has appeared (e.g. auto-FTZ mode in vector units) has eventually been dragged kicking and screaming back to the IEEE standard.
Another way to see it is that floating point is the logical extension of fixed point math to log space to deal with numbers across a large orders of magnitude. I don't know if "beautiful" is exactly the right word, but it's an incredibly solid bit of engineering.
I feel like your description comes across as more negative on the design of IEEE-754 floats than you intend. Is there something else you think would have been better? Maybe I’m misreading it.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth and the main operation is f32 + x * y -> f32 (ie multiply and accumulate into f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
Well, I do know some people who really hate subnormals because they are really slow on Intel and kinda slow on Arm. Subnormals I can see being a pain for graphics HW designers. I for one neither love nor hate IEEE 754, other than -0. I have spent far, far too many hours dealing with it. IMHO, it's an encoding artifact masquerading as a feature.
> what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions
This implies a strange way of defining what "beautiful" means in this context.
IEEE754 is not great for pure maths, however, it is fine for real life.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
> IEEE754 is not great for pure maths, however, it is fine for real life.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
> IEEE 754 is like democracy in a sense that it is the worst, except for all the others.
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
That's doing a lot of work. IEEE 754 does very well in terms of error vs representation size.
What system has accurate results? I don't know of any number system in use that 1) represents numbers with a fixed size, 2) can represent 1/n accurately for reasonable integers n, and 3) can do exponents accurately.
Electronic computers were created to be faster and cheaper than a pool of human computers (who may have had slide rules or mechanical adding machines). Human computers were basically doing decimal floating point with limited precision.
It's ideal for engineering calculations which is a common use of computers. There, nobody cares if 1-1=0 exactly or not because you could never have measured those values exactly in the first place. Single precision is good enough for just about any real-world measurement or result while double precision is good for intermediate results without losing accuracy that's visible in the single precision input/output as long as you're not using a numerically unstable algorithm.
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
I wrote the calculator for the original blackberry. Floating point won't do. I implemented decimal based floating point functions to avoid these rounding problems. This sounds harder than it was; basically, the "exponent" part wasn't how many bits to shift, but what power of ten to divide by, so that 0.1, 0.001 etc. can be represented exactly. Not sure if I had two or three digits of precision beyond what's on the display. 1 digit is pretty standard for 5 function calculators, scientific ones typically have two.
It was only a 5 function calculator, so not that hard, plus there was no floating point library by default so doing any floating point really ballooned the size of an app with the floating point library.
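A toy sketch of that representation (my own illustration, certainly not the original BlackBerry code): an integer significand plus a power-of-ten exponent, so subtracting 2.9 from 8.7 three times lands exactly on zero.

from dataclasses import dataclass

@dataclass
class Dec:
    digits: int   # integer significand
    exp: int      # value = digits * 10**exp

    def __sub__(self, other):
        e = min(self.exp, other.exp)
        a = self.digits * 10 ** (self.exp - e)
        b = other.digits * 10 ** (other.exp - e)
        return Dec(a - b, e)

    def __float__(self):
        return self.digits * 10.0 ** self.exp

x = Dec(87, -1)           # 8.7
for _ in range(3):
    x = x - Dec(29, -1)   # subtract 2.9, exactly
print(x, float(x))        # Dec(digits=0, exp=-1) 0.0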
The JVM based devices came years later. This was around 1998, with the 386 based blackberry pager that could only do emails over Mobitex, no phone calls. It even looked like a pager. At the time, phones were not so dominant, packet-switched data over mobile networks only existed on paper, and two-way paging looked like it had a future. So we totally killed the crude 2-way paging networks that were out there. And RIM successfully later made the transition to phone networks. Wasn't till iPhone and android that RIM ran into trouble.
Sounds like he's just using stock math functions. Both JavaScript and Python act the same way when you subtract the numbers one at a time, saving each intermediate result, rather than computing 8.7 - (2.9*3) in one go.
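For reference, the same key sequence in plain double-precision floats (a quick Python check):

x = 8.7
x -= 2.9
x -= 2.9
x -= 2.9
print(x)               # -8.881784197001252e-16, the same value PCalc shows
print(8.7 - 2.9 * 3)   # 0.0 here, because the rounding errors happen to cancel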
It's not even about features. Calculators are mostly useful for napkin math - if I can't afford an error, I'll take some full-fledged math software/package and write a program that will be debuggable, testable, and have version control.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like speedcrunch, seem to completely misunderstand the niche of a calculator, are they not using it themselves? Calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
I find that much of the time I want WolframAlpha even for basic arithmetic, because I like the way it tracks and converts units. It's such a simple way to check that my calculation isn't completely off base. If I forget to square something or I multiply when I meant to divide I get an obviously wrong answer.
Plus of course not having to do even more arithmetic when one site gives me kilograms and another gives me ounces.
qalc also tracks and converts units, and is open source (practical benefit: runs offline). I have it on Android via a Debian subsystem but just checked and Termux simply has it also (pkg install qalc)
Random example off the top of my head to show off some features: say it takes 5 minutes to get to space, and I heard you come around every 90 minutes, but there's differing definitions on whether space is 80 or 100 km above the surface, then if you're curious about the G forces during launch:
(the output has color coding for units, constants, numbers, and operators)
It understands unicode plusminus for uncertainty tracking, units, function calls like log(n,base), find factors, it will do currencies too if you let it download a table from the internet... I love this software package. (No affiliation, just a happy user who discovered this way too late in life)
It's not as clever as WolframAlpha, no natural language parsing or Pokédex functions (sometimes I do wish that it knew things like Earth radii), but it also runs anywhere and never tells you the computation took too long and so was cancelled
Qalculate! has been my go-to calculator on my laptop for years, very happy to have it on my phone now too!
And it definitely knows planet radii, try `planet("earth"; "radius")`. Specifically, it knows a bunch of attributes about atoms (most importantly to me, their atomic mass) and planets (including niche things like mean surface temperature). You can see all the data here: https://qalculate.github.io/manual/qalculate-definitions-fun...
If you're willing to learn to work with RPN calculators (which I think is a good idea), I can recommend RealCalc for Android. It has an RPN mode that is very economic in keypresses and it's clear the developers understand how touchscreens work and how that ties into the kind of work pocket calculators are useful for.
My only gripe with it is that it doesn't solve compounding return equations, but for that one can use an emulated HP-12c.
RealCalc Plus is great on the Android side.
If using iPhone/iPad/macOS, try BVCalc. Its RPN mode shows you the algebraic expression (i.e., using infix notation display) for each item on the stack, which both helps you check for entry mistakes and also more easily keep track of what each stack item represents. I haven't found another RPN calculator that can do this.
On Android, I just went straight to using an emulator of the HP42S that got me through engineering school in the early 90s. The muscle memory for the basics was still there, even if I can't remember how to use the advanced functions any more.
I still have my actual HP, but it seems to chew batteries now.
Proper ones are certainly usable for more than napkin math. I deal with fairly simple definite integrals and linear algebra occasionally. It's easier for me to plug this into a programmable calculator than it is to scratch in the dirt on Maxima or Mathematica most of the time if I just need an answer.
This relates to what I wrote in reply to the original tweet thread.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality will use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I was building a calculator app -- especially if I had the resources of Google -- I would start with trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
I'm the developer of an Android calculator called Algeo [1], and I wonder which part of it makes it feel slow or not snappy? I'm trying to constantly improve it, though UX is a hard problem.
This seems to be an expression mode calculator. It simply calculates the result of an expression, which makes it like the other 999 calculators in the Play Store.
Classic algebraic calculators are able to do things like:
57 x = (displays the result of 57x57)
3 = (repeats the multiplication and displays 57x3)
[+/-] MS (inverts the result and stores it in the memory without resetting the previous operation)
7 = (repeats the multiplication for 7 and displays 57x7)
7 [1/x] = (repeats the multiplication for 1/7, displays 57/7)
It doesn't have to be basic arithmetic, this way you can do complex numbers, trigonometry, stats, financial calculations, integrals, ODEs etc. Just have a way to juggle operands and inverse operators, and some quick registers/variables one keypress away (see the classic MS/MR mechanism or the stack in RPN). RPN calculators can often be more efficient, although at the cost of some entry barrier.
That's what you do with the classic calculators. Often, you are not even directly calculating things, you're augmenting your intuition and offloading a part of the problem to quickly explore its space in a few keypresses (what if?..), give a guesstimate, and do some sanity checks on whether you're in the right ballpark, all at the same time. Graphing, dimensional analysis in physics, error propagation help a lot in detecting bullshit in your estimates as quickly as possible. If you're also familiar with numerical methods, you can do miracles at the speed of thought. Slide rules were a lot like that as well.
People who do this might not be your target audience, though.
Another app nobody has made is a simple random music player. Tried VLC on Android and adding 5000+ songs from SD card into a playlist for shuffling simply crashes the app. Why do we need a play list anyway, just play the folder! Is it trying to load the whole list at the same time into memory? VLC always works, but not on this task. Found another player that doesn't require building a playlist but when the app is restarted it starts from the same song following the same random seed. Either save the last one or let me set the seed!
pkg install mplayer
cd /sdcard/Music
find . -type f | shuf | head -n 1 | xargs -d '\n' mplayer
(Or whatever command-line player you already have installed. I just tested with espeak that audio in Termux works for me out of the box and saw someone else mentioning mplayer as working for them in Termux: https://android.stackexchange.com/a/258228)
- It generates a list of all files in the current directory, one per line
- Shuffles the list
- Takes the top entry
- Gives it to mplayer as an argument/parameter
Repeat the last command to play another random song. For infinite play:
while true; do !!; done
(Where !! substitutes the last command, so run this after the find...mplayer line)
You can also stick these lines in a shell script, and I seem to remember you can have scripts as icons on your homescreen but I'm not super deep into Termux; it just seemed like a trivial problem to me, as in, small enough that piping like 3 commands does what you want for any size library with no specialised software needed
> Another app nobody has made is a simple random music player.
Marvis on iOS is pretty good at this. I use it to shuffle music with some rules ("low skip %, not added recently, not listened to recently")[0] and it always does a good job.
[0] Because "create playlist" is still broken in iOS Shortcuts, incredibly.
I'm pretty sure the paid version of PowerAmp for Android will do what you want, with or without explicitly creating a playlist.
I have many thousands of mp3s on my phone in nested folders. PowerAmp has a "shuffle all" mode that handles them just fine, as well as other shuffle modes. I've never noticed it repeating a track before I do something to interrupt the shuffle.
Earlier versions (>~ 5 years ago) seemed to have trouble indexing over a few thousand tracks across the phone as a whole, but AFAIK that's been fixed for awhile now.
I can recommend PowerAmp. I've been using it for over a decade and it's been pretty happy with updating my 20,000+ song collection and my 1,000+ song playlist that I sync with an graphical ssh/rsync wrapper (although I've actually been switching to an rclone wrapper, RoundSync, in the last few months).
My personal favorite feature that I got addicted to back when I was using Amarok in KDE 3 was the ability to have a playlist and a queue that resumes to the playlist when exhausted. Then I can listen to an album in order, and then go back to shuffling my driving music playlist when that's done.
Anything that just shuffles on the filesystem/folder level works for this. Even my Honda Civic's stereo does it. Then you have iTunes, which uses playlists, and doesn't work. It starts repeating songs before it exhausts the playlist.
Ah, the old “should a random shuffle repeat songs” debate. Haven’t thought about that in years.
I’m with you in that I think shuffle should be a single list of all songs, played in a random order. But that requires maintaining state, detecting additions and updating the list, etc.
Years ago, a friend was adamant that shuffle should mean picking a random song from the list each time, without state, and if that means the same song plays five times in a row, well, that’s what random means.
> I think shuffle should be a single list of all songs, played in a random order. But that requires maintaining state, detecting additions and updating the list, etc.
You should be able to accomplish this with trivial amounts of state (as in, somewhere around 4 ints).
As an example, I'm envisioning something based on Fermat's little theorem -- determine some prime `p` at least as big as the number of songs you have (N), then to determine the next song, use n := a*n mod p for fixed choice of 1 < a < p, repeating as necessary as long as n > N. This should give you a deterministic permutation of the songs. When you get back to the first song you've played, you can choose to pick a new `a` for a new shuffle, or you can just keep that permutation.
If the list of songs changes, pick new a, p, and update n to be the new position of your current song (and update your notion of "first song of this permutation").
(Regarding why this works: you want a to be a generator of the multiplicative group of Z/pZ, i.e. a primitive root mod p.)
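Here is a small sketch of that scheme (my own Python illustration; the names are mine) with 10 songs, p = 11 and a = 2, which is a primitive root mod 11:

def next_index(n, a, p, num_songs):
    # step through the multiplicative group mod p, skipping values > num_songs
    while True:
        n = (a * n) % p
        if 1 <= n <= num_songs:
            return n

num_songs, p, a = 10, 11, 2
n = first = 3                      # arbitrary starting song
order = [first]
while True:
    n = next_index(n, a, p, num_songs)
    if n == first:                 # back at the start: one full permutation done
        break
    order.append(n)
print(order)   # every index 1..10 appears exactly once

The only state you need to carry between songs is (n, a, p, first) -- four ints, as promised.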
Linear congruential generators have terrible properties if you care about the quality of your randomness, but if all you're doing is shuffling what order your songs play in, they're fine.
Thanks!! I've been looking for an algo to draw pixel positions in a pseudorandom way only once. I didn't know a way to do it without storing and shuffling all positions. Now, I only need to draw a centered filled circle, so there might be a prime number for it, and even if the prime only does it for a given amount of points, I could switch to other primes until the circle is filled, and get an optimal and compressed single-visit scattering algo.
You may have mathed over my head, but I’m not seeing how it avoids playing already-played songs when the list is expanded.
Say I have a 20 song list, and after listening to 15 I add five more. How does this approach only play the remaining 10 songs (5 that were remaining plus 5 new)?
> Say I have a 20 song list, and after listening to 15 I add five more. How does this approach only play the remaining 10 songs (5 that were remaining plus 5 new)?
It doesn't. If you add 5 more songs, then the algorithm as presented will just treat it as if you're starting a new shuffle.
If you genuinely need to keep track of all the songs you've already played and/or the songs that you have yet to play, then I'm not sure you can do much better than keeping a list of the desired play order, randomized via Fisher-Yates shuffle each time you want a new shuffled ordering -- new songs can be appended to said list and shuffled in with the as-yet-unplayed songs.
One way to do it without retaining additional state would be to generate the initial shuffle over N slots, for some N larger than the current song list, leaving empty slots for future additions. If the new songs' indices come up, they get played. You skip any indices that don't correspond to a valid song when it's time to play them.
This has some obvious downsides (e.g. an empty slot that was skipped when played and filled by a later insert won't be played), but it handles both insertion and deletions without replaying songs and you only need to store a single integer.
Eh, it depends what you mean by "works". If you mean that if you add new songs in the middle of playback, it doesn't guarantee that every song is played exactly once before any are repeated, sure, but you can't really do that unless you're actually tracking all of the songs.
Many approaches that guarantee that property have pathological behavior if, say, you add a new song to your library after each song that you've played.
I’d suggest the general solution: the machine can keep a list of the songs it has played, and bump the oldest entries off the list. The list length can be user configurable, 0 handles your truly random friend, 1 would be enough to just avoid immediate repeats, or it could be set to the size of the library. 100 could, I think, give you enough time to not notice any repeats I think, right?
I'm comfortable with "random play" meaning we're going to pick at random each time but I'm not OK with the idea that's "shuffle" shuffle means there were a list of things and we shuffled it. Rolling a D20 is random but it's not shuffling. Games with a random element deliberately (if they're well designed) choose whether to have this independence or not in their design.
A shuffle is type of permutation. There is room to disagree on the constraints on the type of permutations allowed and how they are made. Nevertheless, I 100% agree that sampling with replacement is not a shuffle.
While I agree with you, as soon as the semantics of “random” vs “shuffle” enter the conversation, lay people are lost.
To me “shuffle” is a good metaphor because a shuffled deck of cards works a specific way (you’d be very surprised to draw the same card twice in a row!)
But these things are implemented by programmers who sometimes start with implementation (“random”) and work back to user experience. And, for a specific type of technical person, “with replacement” is exactly what they’d expect.
If you let programmers do randomness you're in a world of pain.
On the whole programmers given a source of random bytes and told to pick any of 227 songs at random using this data will take one byte, compute byte % 227 and then be astonished that now 29 of the songs are twice as likely as the others to be chosen†.
In a class of fifty my guess is you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227) showing they know what "random" means and all the rest will at least attempt that naive solution even if some of them try it out and realise it's not good enough.
† As a bonus in some languages expect some solutions to never pick the first song, or never pick the last song.
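For what it's worth, the rejection-sampling fix is only a couple of lines (a sketch, assuming Python and os.urandom as the byte source):

import os

NUM_SONGS = 227

def pick_song_index(num_songs=NUM_SONGS):
    # throw away bytes >= num_songs instead of computing byte % num_songs
    while True:
        b = os.urandom(1)[0]   # uniform over 0..255
        if b < num_songs:
            return b           # now uniform over 0..226, no song favored

On average this throws away 29 of every 256 bytes, so the waste is small.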
My favorite example of RNG misuse resulting in sampling bias is the general approach that looks like `arr.sort(() => Math.random() - 0.5)`.
> you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227)
If you can't deal with the 10% overhead from rejection sampling (assuming your random bytes are uniform), I guess you could try mushing that entropy back into the rest of your bytestream, but yuck.
Wow, that's an abusive ordering function. Presumably this is a thing people might write in... Javascript? And I'm guessing Javascript has to put up with them doing this and they get a coherent result, maybe it's even shuffled, because eh, it worked in one browser so we're stuck with it.
In Rust this abuse would either "work" or panic telling you that er, that's not a coherent ordering so you need to stop doing that. Not certain whether the panic can only arise in debug builds (or whether it would detect this particular abuse, it's not specified whether you will panic only that you might if you don't provide a coherent ordering).
In C++ this is Undefined Behaviour and there's a fair chance you just introduced an RCE vulnerability into your codebase.
You must track the permutation you're stepping through.
E.g. you have 4 items. You shuffle them to get a random permutation:
4 2 1 3
Note: these are not indices, but identifiers. Let's say you go through the first two items:
4 2 <you're here> 1 3
And two new items arrive. You insert each item into a random position among the remaining items. E.g:
4 2 <you're here> 5 1 6 3
If items are to be deleted, there are two cases: either they have already been visited, in which case there's nothing to do, or they're in the remaining list, in which case you have to delete them from there.
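In code the bookkeeping is small; here's a sketch of the scheme described above (my own Python, names are illustrative):

import random

class ShuffleQueue:
    def __init__(self, items):
        self.order = list(items)
        random.shuffle(self.order)   # the random permutation
        self.pos = 0                 # everything before pos has already been played

    def next(self):
        item = self.order[self.pos]
        self.pos += 1
        return item

    def add(self, item):
        # insert into a random spot among the not-yet-played items
        i = random.randint(self.pos, len(self.order))
        self.order.insert(i, item)

    def remove(self, item):
        i = self.order.index(item)
        if i >= self.pos:            # not played yet: drop it from the remaining list
            self.order.pop(i)
        # already played: nothing to do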
I'm really enjoying the discussion on how shuffle means different things to different people (I personally prefer random, but implementing `shuffle` specifically sounds fun with all of this)
> You insert each item into a random position among the remaining items
Thinking about shuffle + adding, I would have thought "even if it's added to a past position", e.g.
`5 4 6 21 3` as valid.
What do folks expect out of shuffle when it reaches the end? A new shuffle, or repeat with the same permutation?
I think all of this depends on the UI presentation, but when “shuffle” is used, I think a good starting point is “what would a person expect from a deck of cards”, since that’s where the metaphor started.
I don’t think that provides a totally clear answer to “what happens at the end”, but for me it’d lean me towards “a new shuffle”, because for me most of the time a shuffled deck of cards draws its last card, the deck will be shuffled again before drawing new cards.
I haven't used it in a while (now using streaming...), But Musicolet (https://krosbits.in/musicolet/) should be able to do this. Offline-only and lightweight.
I'd love to hear more about this. What was the other one you found? I wrote Tiny Player for iOS and another one for Mac and as more of an "album listener" myself I always struggled to keep the shuffle functionality up to other peoples expectations.
That is Lark Player, but it has so many ads that I recently uninstalled it and kept trying the recommendations in this thread. Foobar2000 uses a system modal to let you add folders, but the SD card is locked by the system in that modal even after enabling permissions, while other apps can access it without issues. Samsung's music player can only add up to 1000 songs per playlist and there is no easy way to split my library. And I just found Musicolet, which uses playlists and doesn't crash when adding my library, but it just jumps to random songs; it would be perfect if it could show the randomized order of the playlist, so you know what's next and what came before. Winamp (WACUP) on desktop does this perfectly.
https://github.com/vanilla-music/vanilla
> Note: As of 23 Jun 2024, Vanilla Music is no longer available in the Google Play store: I simply don't have time to comply with random policy changes and verification requests. Any release you see there is probably an ad-infested fork uploaded by someone else.
Mediamonkey allows me to just go to tracks and hit shuffle and then it randomly adds all my tracks to a queue with no repeats. You can do it at any level of hierarchy, allmusic, playlist, album, artist, genre etc.
Edit: I checked I can also shuffle a folder without adding it to the library.
Tried it, but it seems to use the system's modal to add folders, which blocks the SD card folder due to "privacy reasons", even after giving the app permission to access all files.
Well, this one is on Google. "Full filesystem access" is restricted to specific classes of apps like file managers, and the replacement API is very shitty (it has lots of restrictions and is slower by orders of magnitude).
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPad OS
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
I am seriously curious when it became not a violation of the principle of least surprise that a calculator app uses the network to communicate information from my device (which definitionally belongs to me) to the developer.
Where I am standing, that never happened, but that would require that a simply staggering number of people be classified as unreasonable.
You can't please everyone. Do note these are only sent when something goes wrong in the app. To give you an indication of how little I collect, for the last 30 days, I can see 3,700 app sessions (only includes people who opted into Apple's analytics), and 14 reports of exceptions within the app. That's fewer than 0.4% of users.
Well the 89 is a CAS in disguise most of the time which is mentioned in passing in the article.
But, I agree: I almost never want the full power of Mathematica/Sage initially but quickly become annoyed with calc apps. The 89 and HP Prime/50 have just enough to solve anything where I wouldn't rather just use a full programming language.
HiPER Calc Pro looks and works like a "physical" calculator; I've used it for years to great effect. I also have Wabbitemu but hardly ever use it; the former works fine for nearly everything.
Can you tell me which emulator you're using? I loved using the open source Wabbitemu on previous Android phones, but it seems to have been removed from the app store, so I can't install it on newer devices :-/
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed, and it may even be that the probability of a random number being theoretically indescribable is about 100%, depending on what you take a number to be.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
Wait. Could we in principle find more ways to express some of those uncomputable numbers, or have we conclusively proven we just can't reach them - can't identify any of them in any way we could express?
EDIT: let me guess - there is a proof, and it's probably a flavor of the diagonal argument, right?
For all real numbers in bulk— You may call it a diagonal argument, but it’s just a reduction to Cantor’s original statement, no new proof needed. There are only countably many computable numbers, because there are only countably many programs, because there are only countably many finite strings over any finite alphabet[1].
For individual real numbers— There are of course provably uncomputable ones. Chaitin’s constant is the poster child of these, but you could just take a map of (number of Turing machine in any numbering of all of them) to (terminates or not) and call that a binary fraction. (This is actually not far away from Chaitin’s constant, but the actual one is reweighted a bit to make it more meaningful.) Are there unprovably uncomputable ones? At a guess I’d say so, but I’m not good enough to give a construction offhand.
[1] A countable union of (finite or) countable sets is countable. Rahzrengr gur havba nf sbyybjf: svefg vgrz bs svefg frg; frpbaq vgrz bs svefg frg, svefg vgrz bs frpbaq frg; guveq vgrz bs svefg frg, frpbaq vgrz bs frpbaq frg, svefg vgrz bs guveq frg; rgp. Vg’f snveyl boivbhf gung guvf jbexf, ohg vs lbh jnag gb jevgr gur vairefr znccvat rkcyvpvgyl lbh pna qenj guvf nf n yvar guebhtu gur vagrtre cbvagf bs n dhnqenag.
Typically, since pre-WWW UseNet days it's been used as a standard "no-spoiler" technique so that those who don't want to see a movie twist, puzzle answer, etc don't accidently eyeball scan the give away.
The point is that, in my estimation, the statement in the footnote is a good exercise (provided that you don’t already know it, that it’s not immediately obvious to you, and that you’re still into set theory enough to know what countability and the diagonal argument are). I was initially tempted to just leave it as such, but then thought I’d provide the solution under a spoiler.
Thanks for clarifying. I'm not that young anymore, but I haven't seen this sort of spoiler tagging since forever (assuming that I ever saw it), so I just really didn't know what was going on. Maybe a simple reference to ROT13 at the beginning of your spoiler would have helped.
Yes there's a proof. One flavor is that in any system for expressing numbers using symbols, you can show a correspondence between finite strings of symbols, and whole numbers. So, what works for whole numbers also works for things like proofs and formulas. I think the correspondence may be called "Goedel numbering."
If hypercomputation is possible, then there might be a way to express some of those uncomputable numbers. They just won't be possible with an ordinary Turing machine.
(If description is all you need, then it's already possible to describe some uncomputable numbers like Chaitin's constant. But you can't reliably list its digits on an ordinary computer.)
As for the other interpretation, "have we conclusively proven we can't reach them with an ordinary computer", IIRC, the proof that there are infinitely many uncomputable numbers is as follows: Consider a finitely large program that, when run, outputs the number in question. This program can be encoded as an integer - just read its (binary or source) bytes as a very large base-256 number. Since the set of possible programs is no larger than the set of integers, it's (at most) countably infinite. However, the real numbers are uncountably infinite. Thus a real number is almost never computable.
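The "read its bytes as a very large base-256 number" step is literally a one-liner in Python, for what it's worth:

source = "print(2 ** 0.5)"                  # any program text
n = int.from_bytes(source.encode(), "big")  # its (huge) base-256 integer code
print(n)
# and back again, losslessly:
assert n.to_bytes((n.bit_length() + 7) // 8, "big").decode() == source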
BTW: I tried constructing a new number that could not be computed by any other Turing machine using a variant of the diagonalization argument. Basically, enumerate all Turing machines that generate numbers:
Turing machine 1
Turing machine 2
Turing machine 3
...
Now construct a new Turing machine that produces a new number in which the first digit is the first digit of Turing machine 1's number, the second is the second digit of Turing machine 2's number, etc. Now add 1 (with wrap-around) to each digit.
This will generate a new number that cannot be generated by any of the existing Turing machines.
The bug with this argument (as ChatGPT pointed out) is that because of the halting problem, we cannot guarantee that any specific Turing machine will halt, so the constructed program may never halt, and thus cannot actually compute a number.
> Almost all numbers cannot be practically expressed
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
This behaviour is what you get in say a cheap 1980s digital calculator, but it's not what we actually want. We want correct answers and to do that you need to be smarter. Ideally impossibly smart, but if the calculator is smarter than the person operating it that's a good start.
You're correct that the use of the calculator means we're talking about computable numbers, so that's nice - almost all Reals are non-computable but we ruled those out because we're using a calculator. However just because our results are Computable doesn't let us off the hook. There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
> There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
I would guess that if you pulled in a random sample of 2000 users of pocket calculators and surveyed their use cases you would find a grand total of 0 of them in which the cost function evaluated on a hundredth-decimal place error is at all meaningful.
In other words, no, that difference is not meaningful to a user of a pocket calculator.
The sine of x is 0 when x is any integer multiple of pi - it's not approximately zero, it's really zero, actual zero. So clearly some formulae can actually be zero. If I'm curious, I might wonder whether eventually the formula e ** -x reaches zero for some x
e ** -10 is about 0.000045399929 and presumably you agree that's not zero
e ** -100 is about 3.72 times 10 ** -44, is that still not zero? An IEEE single precision floating point number has a non-zero representation for this, although it is a denormal meaning there's not very much precision left...
e ** -1000 is about 5.075 times 10 ** -435 and so it won't fit in the IEEE single or double precision types. So they both call this zero. Is it zero?
If you take the naive approach you've described, the answer apparently is yes, non-zero numbers are zero. Huh.
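For reference, this is exactly what IEEE double precision does with those inputs (a quick Python check; the printed values are approximate):

    import math
    print(math.exp(-10))    # ~4.54e-05
    print(math.exp(-100))   # ~3.72e-44, still representable as a double
    print(math.exp(-1000))  # 0.0 -- underflows silently to zero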
I'm not particularly worried that you would be unable to recognize patterns or rounding. Sorry you confused your hypothetical self.
And for the record, since we're talking about hundred digit numbers, as an IEEE float that would mean 23 exponent bits and you'd have to go below 10e-4000000 before it rounds to zero. Or 32 exponents bits if you follow some previous software implementations.
> And for the record, since we're talking about hundred digit numbers, as an IEEE float that would mean 23 exponent bits and you'd have to go below 10e-4000000 before it rounds to zero. Or 32 exponents bits if you follow some previous software implementations.
Um, no. Have you confused your not-at-all hypothetical self? Are you mistaking the significand, aka the mantissa for an exponent? The significand in a 32-bit "single precision" IEEE float is 23 bits (with an implied leading 1 bit for normal values)
When I wrote that example I of course tried this, since it so happens that I have been testing some conversions recently so...
>> (e -1000)
f32: 0
f64: 0
real ~5.07595889754945676529180947957434e-435
That's the first thing I said in this conversation. I did not ever suggest that single precision was enough. A hundred digits is beyond octuple precision. Octuple has 19 exponent bits, and in general every step up adds 4 more.
And going further up the comment chain, the original version was your mention of computing 40 followed by hundreds of digits of precision.
Does it matter that some numbers are inexpressible (i.e., cannot be computed)?
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
Well some classical techniques in standard undergraduate real analysis could lead to numbers outside the set of computable numbers, so if you don't allow non-computable numbers you will need to be more careful in the theorems you derive in real analysis. I do not believe that is important however; it's much simpler to just work with the set of real numbers rather than the set of computable numbers.
We know of at least one uncomputable number - Chaitin's constant, the probability that a randomly generated program halts.
Personally, I do wonder sometimes if real-world physical processes can involve uncomputable numbers. Can an object be placed X units away from some point, where X is an uncomputable number? The implications would be really interesting, no matter whether the answer is yes or no.
> Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero. But then you can try 1/(atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
You missed a 4. You are trying to say 1/(4atan(1/5)-atan(1/239)-pi/4) is a division by zero.
On the other hand 1/(atan(1/5)-atan(1/239)-pi/4) is just -1.68866...
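To see why the corrected expression is troublesome for a naive floating-point calculator, here is a quick Python sketch (double precision; the exact residue depends on the platform's libm, and could in principle even round to exactly zero):

    import math
    # Machin's formula: pi/4 == 4*atan(1/5) - atan(1/239), so this is mathematically zero.
    d = 4 * math.atan(1/5) - math.atan(1/239) - math.pi / 4
    print(d)          # typically a tiny rounding residue rather than exact zero
    if d != 0:
        print(1 / d)  # a huge, meaningless number instead of a division-by-zero report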
I played around with the calculator source code from the Android Open Source Project after a previous submission[1]. I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].
The way this article talks about using "recursive real arithmetic" (RRA) reminds me of an excellent discussion with Conal Elliott on the Type Theory For All podcast. He talked about moving from representing things discretely to representing things continuously (and therefore more accurately). For instance, fonts used to be represented as blocks of pixels (discrete), which were rough approximations of what the font really was. But then they started to be represented as lines/vectors (continuous): no matter the size, they represent exactly what the font is.
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.
Thanks, that's a lovely analogy and I'm excited to listen to that podcast.
I think the general idea of converting things from discrete and implementation-motivated representations to higher-level abstract descriptions (bitmaps to vectors, in your example) is great. It's actually something I'm very interested in, since the higher-level representations are usually much easier to do interesting transformations to. (Another example is going from meshes to SDFs for 3D models.)
I noticed this too, but I was confused because the calculator article was informative and interesting. It's entirely unlike the inept fluffy slop that gets posted to LinkedIn
Some quick research yields a couple of open source CAS, such as OpenAxiom, which uses the Modified BSD license. Granted that Google has strong "NIH" tendencies, but I'm curious why something like this wasn't adapted instead of paying several engineers some undisclosed amount of time to develop a calculation system.
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
To make any type of app really good is super hard.
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my to-do list. The way you create, edit, browse, and drag-and-drop items is not at all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
Why does a to-do list have to have any features by default? It could be a blank screen with a "settings" sign in the upper right, where you can enable just the features you need.
If I don't find such a software, I will write it myself. I actually already started:
It seems like you're looking for an outliner? Workflowy might fit your needs: https://workflowy.com/
Like others have said, the perfect to-do list is impossible because each person wants wildly different functionality.
My dream to-do list has minimal interaction, with the details handled like I have my own personal secretary. All I'd do is verbally say something like "remind me to do laundry later" and it would do the rest: Categorizing, organizing, prioritizing, scheduling and adding sub-tasks as needed.
I love the idea of automatic sub-tasks created at a level which helps with your particular procrastination level. For example, "do laundry" would add in "gather clothes, bring to laundry room, separate colors, add to washer, set timer, add to dryer, set timer, get clothes, fold clothes, put away, reschedule in a week (but hide until then)". Maybe it could even add in Pomodoro timers to help.
LLMs with reasoning might get us there soon - we've been waiting for Knowledge Navigator like assistants for years.
This is the sort of thing I like trying to make llms do, thanks for the idea. I have a discord bot set up already that sends notifications and accepts notes; I will try adding some endpoints and burning some credits I have to see how hard it is to make AI talk to alarm endpoints in a smart way, etc
I'm one of the creators of Godspeed, which is a fast, 100% keyboard oriented to-do app (though we do support drag and drop as well!). And we've got a web app!
Just tried it, and this is very much the opposite of what I am looking for.
What I would like is a very minimal layout. Basically with nothing on the screen. And I want to be able to organize my world by dragging, dropping, swiping recursive items.
One issue I have with Trello is that it has multiple types of items. And that it is not recursive.
When I create an item "Supermarket" and then an item "Bread", I cannot drag and drop the item "Bread" into "Supermarket". But that is how I think. I have a lot of "items" and each item can contain other "items". I don't want any other type of object.
Another problem is that I cannot customize the layout. I can't remove every icon from the items in the list. I only want to see the item names, no other info like the icon that shows that there is a description or anything. But Trello seems to not support that.
I would love to have a To-Do app that is fluid for both one-off tasks and periodic checklists (daily/weekly/monthly/etc.) Most importantly, I want it to yell at me to actually do it. I was pretty surprised that basically nothing seems to fit the bill and even what existing "GTD" type apps could do felt cumbersome and limited.
There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
All the calculators that I just tried for the article's expression give the wrong answer (HP Prime, TI-36X Pro, some casio thing). Even google's own online calculator gives the wrong answer, which is mildly ironic. [https://www.google.com/search?q=1e101%2B1-1e101&oq=1e101%2B1]
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
Tried it with the HP Prime and it gave the exact answer of 1 for the test. One needs to put it in CAS mode and use the exact form 10^100 instead of 1E100. You will get the right answer if the calculator is instructed to use its very powerful CAS engine.
I enjoyed the article, but it seems Apple has since improved their calculator app slightly. The first example is giving me the correct result today. However, the second example with the “Underflow” result is still occurring.
Oh no- I stand corrected. I tried it again and you are right. I had just woken up when I did my initial test and must have typoed something. I can no longer edit or delete my original comment :(
I remember hearing stories that for a time there was no engineer inside Apple responsible for the iOS Calculator.
Now it seems to be revived as there were some updates to it, but those also removed one of my favourite features -> tapping equals button no longer repeats the last operation.
They fortunately fixed the repeating feature in iOS 18.3. Though it does seem a bit ridiculous that something like this is tied to the entire OS version.
Find a finite-memory representation for points which allows exact addition, multiplication, and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
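For concreteness, here is that test with plain doubles (hypothetical point and angle; with exact arithmetic B and C coincide, but the two float paths usually round differently):

    import math

    def rotate(p, theta):
        """Rotate point p around the origin by angle theta."""
        x, y = p
        c, s = math.cos(theta), math.sin(theta)
        return (c * x - s * y, s * x + c * y)

    A = (1.0, 0.5)   # hypothetical point
    theta = 0.3      # hypothetical angle

    B = rotate(A, theta)
    # Rotate by pi + theta, then reflect through the origin: mathematically the same as B.
    C = tuple(-v for v in rotate(A, math.pi + theta))

    print(B == C)    # usually False with doubles
    print(B, C)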
The point underlying the problem is about the closure of operations [1]
Typically one would like to be able to calculate things without making errors, which accumulate.
The symbolic representation you suggest uses a growing amount of memory to represent the point by all the operations which have been applied to it since the start.
What we would rather do is define a set of operations that is closed over a specific set of points, which allows us to accumulate information by doing the computation rather than deferring it.
One could for example think of using fixed-point numbers to represent the coordinates, and defining an extra point at infinity to handle overflow. Then you have some properties that you like and some that you like less. For example, minimum distances, which can define a point uniquely in continuous R^2, are no longer unique when you constrain yourself to an integer grid by using fixed point.
Or you could use rational numbers to store the coordinates, like in CGAL (which lets you know which side of a plane you are on without z-fighting), but they still require growing memory. You can maybe add some rules to handle underflow and overflow.
Or you can merge close points, but maybe you lose some information.
Or you can define the operations on lattices or finite automata, or do some error-correcting codes or dynamically recombining graphs (aka the ruliad).
Years ago The Daily WTF had a challenge for writing the worst calculator app. My submission maintained calculation state by emitting its own source code, recompiling, and running the new executable.
I first learned to program on a Wang 2200 computer with 8KB of RAM, back in 1978. One of the math teachers stayed an hour late most days to allow us nerds to come in and use the two computers. There were more people than computers, so often you'd only get 10 or 15 minutes of time.
Anyway, I wrote a program where you could enter an equation and it would draw an ASCII graph of the curve. I didn't know how to parse expressions and even if I had I knew it would be slow. The machine had a cassette tape under computer control for storing and loading programs. What I did was to take the expression typed by the user and convert each one into its tokenized form and write it out to tape. The program would then load that just created overlay which contained something like "1000 DEF FNY(X)=X^2-5" and a FOR loop would sweep X over the designated range, and have "LET Y=FNY(X)" to evaluate the expression for me.
As a result, after entering the equation, it would take about five seconds to write out the overlay, rewind a couple blocks, and load the overlay before it would start to plot. But once it started it went pretty fast.
Check out wang2200.org if you don't know about it. There is an emulator that runs on Windows and macOS, lots of scanned documents, many disk images, and some technical details on the microarchitecture of the various 2200 CPUs (they didn't use a microprocessor -- they are all boards and boards of TTL components, until they finally put everything on a single ASIC in the 80s).
Interesting article, and kudos to Boehm for going the extra mile(s), but it seems like overkill to me.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with the iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
The thing about this calculator app is that it can display any number of digits just by scrolling the display field. The UX is "any number of digits the user wants" not some predetermined fixed number of digits.
As a developer, "infinite scroll to get more digits" sounds really cool. It sounds conceptually similar to lazily-evaluated sequences in languages like Clojure and Haskell (where you can have a 'virtually-infinite' list or array -- basically a function -- and can access arbitrarily large indices).
As a user, it sounds like an annoying interface. On the rare case I want to compute e^(-10000), I do not want to scroll for 3 minutes through screens filled with 0s to find the significant digits.
Furthermore, it's not very usable. A key question in this scenario would be: how many zeroes were there?
It's basically impossible to tell with this UI. A better approach is simply to switch to scientific notation for very large or very small numbers, and leave decimal expansion as an extra option for users who need it. (Roughly similar to what Wolfram Alpha gives you for certain expressions.)
One of the first ideas I had for an app was a calculator that represented digits like shown in the article but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet like interface you could verify if the operations produced the final equation you were modeling in order to help validate if the number was correct or not. I had a TI-89 that could do something close and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn’t seem that much easier to me.
That's a CAS, as mentioned. There are plenty of open source libraries available, but one that specifically implements the algorithms discussed in this article is flintlib. Here's an example from their docs showing exactly what you want:
https://flintlib.org/doc/examples_calcium.html#examples-calc...
At the risk of coming across as being a spoilsport, I think when someone says "anyone can write a calculator app", they just mean an app that simulates a pocket calculator (which is indeed pretty easy) as opposed to one which always gives precisely the right answer (which is indeed impossible). Also, you can avoid the most embarrassing errors just by rearranging the terms to do cancellation where possible, e.g. sqrt(2) * 3 * sqrt(2) is absolutely precisely 6, not 6 within some degree of approximation.
> as opposed to one which always gives precisely the right answer (which is indeed impossible)
Per the article, it's completely possible. Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
It... really doesn't seem like a lot of effort and thought. I feel like anyone who's implemented a command algebra for anything is already halfway there.
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed; after this paragraph I expected a better algorithm, and instead they decided to give up. Fredrik Johansson, in his paper "Calcium: computing in exact real and complex fields", gives a partial algorithm for the problem and writes: "Algorithm 2 is inspired by Richardson's algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson's algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those systems."
I use python repl as my primary calculator on my computer.
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (unless you overflow your entire memory), so that kind of precision loss isn't a big deal (quick illustration at the end of this comment).
2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.
3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops, this is python
4. Every math feature in the windows default calculator is available in the math library.
5. As bad as python's performance reputation is, it's not at all going to be noticeable for simple math.
I was always a little envious of the people that could use bc because they knew how. I know Python, and it's installed on Linuxes by default, so now I am no longer envious.
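A quick illustration of point 1 in the standard REPL (Python ints are exact, floats are not):

    >>> 10**100 + 1 - 10**100   # exact integer arithmetic
    1
    >>> 1e100 + 1 - 1e100       # double precision: the 1 is lost when 1e100 + 1 rounds back to 1e100
    0.0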
> Obviously we'll want a symbolic representation for the real number 1
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
I think the issue was that they are representing a real as a product of a rational and that more complicated type, so without a symbolic representation for 1, when representing a rational they would have to multiply it by an RRA representation of 1, which brings in all the decision-problem issues.
Sorry for being unclear about this. A number is being expressed as a rational times a real. In the case where the rational is exactly the number we want, we want to be able to set the real to 1, so the multiplication has no effect
Because they express numbers as rational times a real, so the real in all those cases would be one. When it’s one, you do rational math as normal without involving reals.
Off topic, but I believe naming this specific kind of number "real" is a misnomer. Nothing in reality is an expression of a real number. Real numbers pop up only when we abstract reality into mathematical models.
In Polish, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and the reals should be called something like "abstract" or "limiting" numbers, because they first pop up as limits of some process working on rational numbers for an infinite number of steps.
I really hate when people put cat images and memes in a serious article.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me when SideFX started putting memes into their official tutorial youtube channel. At least this is just a webpage and we can scroll through them...
While we're already breaking the HN guidelines—"Please don't complain about tangential annoyances—e.g. article or website formats"—let me just say that the scrolljacking on this article is awful.
I've not intentionally implemented any scrolljacking (I'm using the default obsidian template), but I'm curious what you mean as I also don't see where the scrolljacking would happen. Could you elaborate on the way in which the user experience is awful now, so I can improve it?
What browser are you using? Can you describe the issue? Typically scroll jacking is when you hook on scroll to forcefully scroll the page to something, but that's not happening here.
The tone of the article has given away the fact that the article is not serious. At least not the way it's presented. You want something serious? Go read the pdf.
And I don't mind at all. Without this article, I probably will never know what's in the paper and how they iterated. I'll likely give up after reading the abstract -- "oh, they solved a problem". But this article actually makes much more motivating to read the original paper, which I plan to do now.
I'm happy to have spread the good word! Note that when you read the paper, some implementation details are slightly different than my description. For instance, they always store the recursive real form of the real part of each number, even when the symbolic part perfectly describes it. I removed this redundancy to try to simplify it for twitter, but I hope it doesn't confuse those who go on to read the paper afterwards.
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
I haven’t really used the iPad’s calculator app, but it looks exactly like a larger version of the iPhone app. So I don’t think there are any technical reasons why it took so long for the iPad to get that app.
Due to backwards compatibility modern PC CPUs have some mathematical constants in hardware, one of them Pi https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f... Moreover, that FLDPI instruction delivers 80 bits of precision, i.e. more precise than FP64.
That’s pretty much useless in modern world because the whole x87 FPU is deprecated. Modern compilers are generating SSE1 and SSE2 instructions for floating-point arithmetic, instead of x87.
As far as I know, the Windows calculator has a similar approach. It uses rationals, and switches to Taylor expansions to try to avoid cancellation errors. Microsoft open-sourced it some time ago on GitHub.
Lowkey this is why IEEE 754 floating point is both a blessing and a curse: yeah, it's fast and standardized, but it also introduces unavoidable precision loss, especially with iterative computations where rounding errors stack up in unpredictable ways. People act like increasing the precision bits solves everything, but you just push the problem further down; you're still dealing with truncation, cancellation, etc. (and edge cases where numerical stability breaks down).
… and this is why interval arithmetic and arbitrary-precision methods exist: they give guaranteed bounds on the error instead of just hoping FP rounding doesn't mess things up too badly. But obviously those come with their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly with operand size.
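As a rough sketch of the interval idea (hand-rolled and simplified: a real interval library would also apply directed rounding to the endpoint operations themselves; needs Python 3.9+ for math.nextafter):

    import math

    class Interval:
        """Carry a lower and an upper bound through every operation."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    # 0.1 has no exact binary representation, so bracket it between adjacent doubles.
    tenth = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))
    print(tenth + tenth + tenth)  # bounds that bracket 0.3 (up to the rounding caveat above)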
I wonder if hybrid approaches could be the move: symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. That could optimize the tradeoffs dynamically, keeping things efficient while minimizing precision loss in critical operations; it would be especially useful in contexts where precision requirements shift in real time. It might even be interesting to explore adaptive precision techniques, where computations start at lower precision but refine iteratively based on error estimates.
This article was really well written. Usually in such articles I understand about 50%, maybe if I'm lucky 70%, but this one I've understood nearly everything. It's not so much a smartness thing as an absolute refusal on my part to learn the jargon of programming, plus my severe lack of knowledge of all the big words that are thrown around, lol. But it's really simply written; love it.
If you accept that Pi and Sqrt(2) will be represented as a terminating series of digits (say, 30), then 99% of the problems stated go away. My HP calculator doesn't represent the square root of 2 as a magic number, it's 1.414213562.
At some point, when I get a spare 5 years (and/or if people start paying for software again), I will start to work on a calculator application. Number system wrangling is quite fun and challenging, and I am hoping to incorporate units as a first-class citizen.
This is really cool, but it does show how Google works. They’ll pay this guy ~$3million a year (assuming stock appreciation) to do this but almost no end user will appreciate it in the calculator app itself.
Does anyone know if this was the system used by higher end TI calculators like the TI-92? It had a 'rational' mode for exact answers and I suspect that it used RRA for that.
The TI-92 and similar have a full-on computer algebra system that they use when they're in exact mode [1]. It does symbolic manipulation.
This is different from what the post (and linked paper) discuss, where the result will degrade to recursive real arithmetic, which is correct but only to a bounded level of precision. A CAS will always give a fully-exact (although sometimes very unwieldy) answer.
I doubt that most people using the calc app expect it to handle such situations. It's nice that it does of course but IMO it misses the point that the inputs to a lot of real world calculations are inaccurate to start with.
I.e. it's more likely that I've made a few-mm mistake when measuring the radius of my table than that I'm not using a precise enough version of Pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
Everything in life is estimation. A calculator that tells you the perfect answer in some highly unusual situation probably isn't fixing the most common source of error.
E.g. if I measure an angle and am not sure whether it's 45 degrees or 46, then an answer like this is pointless:
0.7071067811865476
cos of 46 (if I've converted properly to radians) is
0.6946583704589973
so my error is about 0.01 and those long lists of digits imply a precision I don't have.
I think it would be more useful for most people to tell them how much error there is in their results after guessing or letting them assign the estimated error in their inputs.
Examples include finance and resource accounting. Mathematical proof (yes sometimes they involve numbers), etc.
Even in engineering and carpentry it’s not true. The design process is idealization, without real world measurements. It’s conceptually useful for precise numbers to sum properly on paper. For example it’s common to divide lengths into fractional amounts which are expected to sum to a whole.
> tell them how much error there is
But once again, we know how to build calculators that do most calculations with 0 error. So why are we planning for an estimation problem we don’t have?
Read the article. Yes, if you want to put out sqrt(2) in decimal form, it will be an approximation. But you can present it as sqrt(2).
>> tell them how much error there is
>But once again, we know how to build calculators that do most calculations with 0 error. So why are we planning for an estimation problem we don’t have?
We have accepted lack of perfection from calculators long ago. I cannot think of a use-case which needs it from anyone I know. Perhaps some limited number of people out there really need a calculator that can do these things but I suspect that if they do there's a great chance they don't know it can handle that sort of issue.
I have more trouble with the positions of the buttons in the UI than with sums that don't work out as expected. The effort to get buttons right seems far less to me.
I can think of useful things I'd like when I'm doing real-world work which this feature doesn't address at all and I wonder why such an emphasis was put on something which isn't really that transformational.
I understand that if you use American units you might be calculating things in fractions of an inch but since I've never had to use those units it's never been necessary to do that sort of calculation. I suppose if that helps someone then yay but I can only sympathise to an extent.
Where I have problems is with things that aren't precise - where the bit of wood that I cut turns out a millimetre too short and ends up being useless.
I really do think we should just use the symbolic systems of math rather than trying to bring natural world numbers into a digital number space. It's this mapping that inherently leads to compensating strategies. I guess this is called an algebraic system like the author mentioned.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1 and 0, don't forget it's really more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
I really wonder what the business case for spending so much effort on such precision was. Who are the users who need such accuracy but are using android calculator?
Students learning about real numbers. Yes seriously.
Unlike software engineers who have already studied IEEE754 numbers, you can't expect a middle school student to know concepts like catastrophic cancellation. But a middle school student wants to poke around with trigonometric functions and pi to study their properties, but a true computer algebra system might not be available to them. They might not understand that a random calculator app doesn't behave correctly because it's not using the same kind of numbers discussed in their math class.
While phones are mostly a circus, people do try to use them for serious things. For a program, you make the calculations as accurate as the application requires. If you don't know what a tool will be used for, you never really get to feel satisfied.
Really interesting article. I noticed that my Android calculator app could display irrational numbers like PI to an impressive amount of digits, if I hold it sideways.
Are you in standard or scientific? In standard, each new operator (not sure if that's the correct term) is calculated immediately, i.e. 1+2x3 is worked out as 1+2 (stored into the buffer as 3), then x 3 = 9.
But scientific does it correctly where it just appends the new expression onto the buffer instead of applying it
I'm on windows 11. I just did it and it replied "7". I subtracted 7 to see if there was some epsilon error but it reported "0". What do you experience?
There should have been an "x" on the right of the "contact me" portion that you could click to make it go away. Sounds like it didn't show up for you, so sorry about that. Unfortunately I don't have an iPhone SE to test against and the "x" does seem to show up on the iPhone SE screen-size simulator in Chrome. This means I don't know how to reproduce the issue and probably won't be able to resolve it without removing the "contact me" page entirely, which I'm not willing to do right now.
So, 'bc' just has the (big) rationals. Rationals are the numbers you could make by taking one integer (say 5 or minus sixteen trillion and fifty-one) and dividing it by some positive integer (such as three or sixty-two thousand)
If we have a "Big Integer" type which can represent arbitrarily huge integers, such as 10 to the power 5000, we can use two of these to make a Big Rational, and so that's what bc has.
But the rationals aren't enough for all the features on your calculator. What's the square root of ten ? How about the square root of 40 ? Now, multiply those together. The correct answer is 20. Not 20.00000000000000001 but exactly 20.
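A quick way to see the difference, assuming Python with sympy installed for the symbolic side:

    import math
    import sympy

    print(math.sqrt(10) * math.sqrt(40))    # doubles: extremely close to 20, but not guaranteed to be exactly 20.0
    print(sympy.sqrt(10) * sympy.sqrt(40))  # symbolic: simplifies to exactly 20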
I actually use bc a lot and the fact it's just the big rationals was annoying which is why I set off on the route that ended with my crate `realistic`
Amusingly one of the things I liked in bc was that I could write stuff like sqrt(10) * sqrt(40) and it works -- but even the more-bc-like command line toy for my own use doesn't do this, turns out a few months of writing the guts of a computable reals implementation makes (* (sqrt 10) (sqrt 40)) seem like a completely reasonable way to write what I meant and so "Make it work like bc" faded from "Important" to "Eh, whatever I'll get to it later".
If you'd asked me a year ago if "fix edge case bugs in converting realistic::Real to f64" would happen before "Have natural expressions like 1 + 2 * 3 do what is expected" I'd have said not a chance, but shows how much I knew.
No? They made a goal to show 0.0000 in as few places as possible, and they got as close to it as they could without compromising their other requirements.
Was given the task to build a simple calculator app as a project for a Java class I took in college.
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers though. PI accurate to the 8 digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
> We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator
(with a footnote: This excludes reports from one or two bugs that have now been fixed for many months. Unfortunately, we continue to receive complaints about incorrect results, mostly for two reasons. Users often do not understand the difference between degrees and radians. Second, there is no standard way to parse calculator expressions. 1 + 10% is 0.11. 10% is 0.1. What's 10% + 10%?)
When you have 3 billion users, I can imagine that getting rid of bugs that only affect 0.001% of your userbase is still worthwhile and probably pays for itself in reduced support costs.
I’m confused. Why would 1 + 10% obviously be 0.11?
I expected 1.1 (which is what my iOS calculator reported, when I got curious).
I do understand the question of parsing. I just struggle to understand why the first one is confidently stated to correctly result in a particular answer. It feels like a perfect example itself of a problem with unclear parsing.
I know adding % has multiple conventions, but this one seems odd, I'd interpret 1 + 10% as "one plus 10 percent of one" which is 1.1, or as 1 + 10 / 100 which happens to be also 1.1 here
The only interpretation that'd make it 0.11 is if it represents 1% + 10%, but then the question of 10% + 10% is answered: 0.2 or 20%. Or maybe there's a typo and it was supposed to say "0.1 + 10%"
I think a big issue with how we teach math is the casualness with which we introduce children to floating points.
It's like: Hey little Bobby, now that you can count, here are the ints and multiplication/division. For the rest of your life there will be things to learn about them and their algebra.
Tomorrow we'll learn how to put a ".25" behind it. Nothing serious. It just adds multiple different types of infinities with profound impact on exactness and computability, which you have yet to learn about. But it lets you write 1/4 without a fraction, which means it's simple!
Totally agree. It bothered me when I was younger, though I had no idea how to explain why, but this should be deeply unsettling to everyone who encounters it:
That does appear to equal exactly 5... would you care to show how it doesn't?
    $ cat check_math.c
    #include <stdio.h>

    int main() {
        // Define the values as float (32-bit floating point)
        float one_third = 1.0f / 3.0f;
        float five = 5.0f;

        // Compute the equation
        float result = one_third + five - one_third;

        // Check for exact equality
        if (result == five) {
            printf("The equation evaluates EXACTLY to 5.0 (True)\n");
        } else {
            // Print the actual result and the difference
            printf("The equation does NOT evaluate exactly to 5.0 (False)\n");
            printf("Computed result: %.10f\n", result);
            printf("Difference: %.10f\n", result - five);
        }
        return 0;
    }
    $ gcc -O0 check_math.c -o check_math; ./check_math
    The equation evaluates EXACTLY to 5.0 (True)
I don't care if it gives me "Underflow" for BS like e^-1000; just give me a text field that will be calculated into a result that's represented the way I want (sci notation, hex, binary, ASCII, etc., whatever).
All standard calculator apps are imitations of a desktop calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least macOS has cmd+space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TLDR: the real problem of calculators is their UI, not arithmetic core.
Writing a CAS from scratch would've been much more complicated.
Reusing an existing one? Maybe not.
Yes, it would likely be slower, but is a 1ms vs. 10ms response time in the calculator app really such a big deal? Entering a correct calculation / formula on the smartphone likely takes much longer.
Slightly disappointing: The calculator embedded in Google's search page also gives the wrong answer (0) for (10^100) + 1 − (10^100). So apparently they don't use the insights they gained from their Android calculator.
And yet Android's calculator is quite bad. Despite being able to correctly calculate stuff that 99.99% of the population don't care about, it lacks many scientific operations that a good chunk of accountants, engineers and coders would make use of regularly. This is a classic situation of engineers solving the fun/challenging problems before the customer's actual problems.
I removed telemetry on my Win10 system and now calc.exe crashes on basic calculations. I've reported this but nobody cares because the next step in troubleshooting is to reinstall Windows. So if telemetry fails, calc.exe will silently explode. Therefore no, anyone cannot make it.
I don't see how one can expect them to take a report worded this way seriously. Perhaps if they actually reported the crash without the tantrum the team would fix it.
Windows XP's mspaint.exe stopped working at some point :(. I was also in the team "simple tool worked as I want it to" for as long as that lasted. (I don't use Windows anymore, not for only this reason obviously but still, I don't seem to have these problems anymore where you can't make things work a certain way.)
The point of the article is to show building a calculator requires a CAS, which should have been obvious to anyone with a basic understanding of how a calculator works.
The premise of the article is itself somewhat bogus, but I suppose there are programmers today who never had to work with a graphing calculator.
While RRA is an interesting approach, ultimately it wasn't sufficient.
Re-using an off-the-shelf CAS would have been the more practical solution, avoiding all the extra R&D on a novel number representation that wasn't quite sufficient to do the job.
lol I ran into this when making a calculator program because Google's calculator didn't do certain operations (such as adding clock time results like 1:23+1:54) and also because Google occasionally accuses me of being a bot when I search for too many equations.
Maybe I'll get back to the project and finish it this year.
Correct, but my goal was just to get the same result as JS `eval()`, except for -n * m, because in my opinion this shouldn't require parentheses. It's still a good learning exercise to do this; I don't want to deal with floating point things, etc.
Yes, if you use limited-precision data types. But you have it the wrong way: if you first cancel out the $BIGNUM (i.e. reorder to $BIGNUM - $BIGNUM + 1), the answer is 1; if you first evaluate $BIGNUM + 1, the answer is 0, because $BIGNUM + 1 has no representation distinct from $BIGNUM. Limited-precision arithmetic is not, in general, associative. It's still arithmetic, though, just not in the ring of integers. But the whole point of the article was that it is, of course, possible to do better and get exact results.
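With Python doubles, for example:

    big = 1e101
    print(big + 1 - big)   # 0.0: big + 1 rounds straight back to big
    print(big - big + 1)   # 1.0: cancelling first preserves the 1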
No, because only in our imaginations, and in no place in the universe, can we ignore the significance of measurements. If we are sending a spaceship to an interstellar object 1 light year away from earth, and the spaceship is currently 25 miles from earth (on the way), you are insisting that you know more about the distance from earth to the object than you actually do if you think that the distance from the spaceship to the galaxy is 587862819274.1 miles.
Interesting article but that feels like wasted effort for what is probably the most bare-bones calculator app out there. The Android calc app has the 4 operations, sin cos tan ^ ln log √ ! And that's it. I think most people serious about calculator usage either have a physical one or use another more featureful app and the others don't need such precision.
It's not wasted effort at all, as this app comes installed by default for over a billion users. Only a tiny fraction will ever install another calculator app, so the default one better work entirely correctly. When you have that many users it's hard to waste effort on making the product better.
In the year he did this he easily could have just done some minor interface tweaks to a Ruby REPL which includes the BigDecimal library. In fact, I bet feeding this post to an AI could result in such a numerically accurate calculator app, maybe as a single-file Sinatra Ruby web app designed to format to phone resolutions natively.
As to my understanding, continued fractions can represent any number to as many decimal points as you need. So if you need π you can just calculate 2 decimal points and write 3.14; if you want to calculate π*10^9 you can calculate, e.g., 11 digits and write 3141592653.58. I think this is what the OP means and I am not sure why you do not agree.
But here continued fractions are used to progressively generate approximations to the true real number. So you have no control over the denominator, and as you mentioned, repeated division is necessary for most numbers. In comparison, a digit-generation approach can be tailored to the output radix (typically 10). Division still likely happens, but only in the approximation routine itself, and thus can be made more efficient.
I agree, though, that the article is about a calculator app, and the user typically won't care whether generating an output takes 10ns or 100ms - it would look like an instant response anyway.
The linked paper by Bill Gosper is about potentially infinite continued fractions with potentially irrational symbolic terms.
> finite
That's the issue, no? If you go infinite you can then express any real number. You can then actually represent all those whose sequence is equivalent to a computable function.
You are describing something that is practically more like a computer algebra system than a number system. To go infinite without infinite storage, you need to store the information required to compute the trailing digits of the number. That is possible with things like pi, which have recursive formulas to compute, but it's not easy for arbitrary numbers.
> That is possible with things like pi, which have recursive formulas to compute, but it's not easy for arbitrary numbers.
It is possible for pretty much all the numbers you could care about. I'm not claiming it is possible for all real numbers though (notice my wording with "express" and "represent"). In fact since this creates an equivalence between real numbers and functions on natural numbers, and not all functions are computable, it follows that some real numbers are not representable because they correspond to non-computable functions. Those that are representable are instead called computable numbers.
How would you get those numbers into the computer anyway? It seems like this would be a practical system to deal with numbers that can be represented exactly in that way, and numbers you can get at from there.
The way every other weird number gets into a computer: through math operations. For example, sqrt(7) is irrational. If you subtract something very close to sqrt(7) from it, then you need to keep making digits.
Continued fractions are very cool. I once saw in a CTF competition a question about breaking an RSA variant that relied on the fact that a certain ratio was a term in a sequence of continued fraction convergents.
Naturally, the person pursuing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
Sounds a bit like https://en.wikipedia.org/wiki/Wiener%27s_attack.
(It's not a good article when it comes to the attack details, unfortunately.)
I gave a presentation on that attack in a number theory class - it made not getting that problem sting a little more.
Why unnecessarily air this grievance in a public forum? If this person reads it they will be unhappy, and I'm sure they have already suffered enough from this failure.
Oh I don’t think of it like that - it was not a super serious competition and aside from some lighthearted ribbing there was certainly no suffering from any failure.
Fair enough
What do you mean by "[n]aturally" here?
It's used with sarcasm / irony. In this use case, "naturally" implies the author intended to communicate one or more emotions from a certain narrow set of possibilities. That set includes:
- An eye-rolling, critical emotion - where they used up a valuable spot on the team to retain a person who ostensibly promises to specialize in exactly this type of problem, but instead they proved to be useless even in the one area they were supposed to deliver value.
- An emotion similar to that invoked by "c'est la vie". Sometimes this is resigned, sometimes this is playful, sometimes this is simply neutrally accepting reality.
Follow-up comments from the person who wrote it indicate they meant it in a playful sense of "c'est la vie", and indicated that the team found camaraderie and joy in teasing each other about it.
Sorry if this sounds a little bit like ChatGPT - I wrote it myself but at the point when one is explaining this kind of thing, it's difficult to not write like an alien or a robot.
It was an ironic twist of fate that we were preparing specifically for this type of challenge and, when presented with exactly what we had prepared for we failed to see the solution.
Why not just say "ironically"?
I think the other comment had an excellent breakdown of the various factors at play, so I will start by saying I fully endorse what was said there.
To highlight a key point: “naturally” is slightly humorous because it implies that while the outcome was ironic, it should almost be expected that an ironic bad thing happens. In addition, it signals my opinion on such situations more generally, whereas “ironically” is a more straightforward description of what happened that would add less humor and signal less of my personality.
I have been working on a new definition of real numbers which I think is a better foundation for real numbers and seems to be a theoretical version of what you are doing practically. I am currently calling them rational betweenness relations. Namely, it is the set of all rational intervals that contain the real number. Since this is circular, it is really about properties that a family of intervals must satisfy. Since real numbers are messy, this idealized form is supplemented with a fuzzy procedure for figuring out whether an interval contains the number or not. The work is hosted at (https://github.com/jostylr/Reals-as-Oracles) with the first paper in the readme being the most recent version of this idea.
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
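For a concrete picture, here is a minimal Python sketch of such an interval oracle for sqrt(2) (my own toy illustration, not code from the papers; it assumes nonnegative rational endpoints):

    from fractions import Fraction

    def sqrt2_in_interval(p, q):
        """Oracle: does the open rational interval (p, q) contain sqrt(2)?
        Assumes 0 <= p < q."""
        p, q = Fraction(p), Fraction(q)
        return p * p < 2 < q * q

    print(sqrt2_in_interval(1, 2))                            # True
    print(sqrt2_in_interval(Fraction(7, 5), Fraction(3, 2)))  # True: 7/5 < sqrt(2) < 3/2
    print(sqrt2_in_interval(Fraction(3, 2), 2))               # False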
That sounds kind of similar to Dedekind cuts but crossed with ordered sequences < and > the real? Cool web site.
It is equivalent to Dedekind cuts as one of my papers shows. You can think of Dedekind cuts as collecting all the lower bounds of the intervals and throwing away the upper bounds. But if you think about flushing out a Dedekind cut to be useful, it is about pairing with an upper bound. For example, if I say that 1 and 1.1 and 1.2 are in the Dedekind cut, then I know the real number is above 1.2. But it could be any number above 1.2. What I also need to know is, say, that 1.5 is not in the cut. Then the real number is between 1.2 and 1.5. But this is really just a slightly roundabout way of talking about an interval that contains the real number.
Similarly with decimals and Cauchy sequences, what is lurking around to make those useful is an interval. If I tell you the sequence consists of a trillion approximations to pi, to within 10^-20 precision, but I do not tell you anything about the tail of the sequence, then one has no information. The next term could easily be -10000. It is having that criterion about all the rest of the terms being within epsilon that matters and that, fundamentally, is an interval notion.
How do you work out an answer for x - y when eg x = sqrt(2) and y = sqrt(2) - epsilon for arbitrarily small epsilon? How do you differentiate that from x - x?
In a purely numerical setting, you can only distinguish these two cases when you evaluate the expression with enough accuracy. This may feel like a weakness, but if you think about this it is a much more "honest" way of handling inaccuracy than just rounding like you would do with floating point arithmetic.
A good way to think about the framework, is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
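A rough Python sketch of that idea, using plain bisection with exact rationals (not the actual algorithm of the reals library):

    from fractions import Fraction

    def sqrt2_bounds(eps):
        """Rational lower/upper bounds on sqrt(2) that are within eps of each other."""
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo, hi

    lo, hi = sqrt2_bounds(Fraction(1, 10**12))
    # Squaring the bounds gives an interval that straddles 2 and keeps tightening,
    # but it never collapses to exactly [2, 2]:
    print(float(lo * lo), float(hi * hi))   # slightly below 2, slightly above 2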
> you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ
The problem with that from a UX perspective is that you won't even get to write out the first digit of the solution because you can never decide whether it should be 1.999...999something (which truncates to 1.99) or 2.000...000something (which truncates to 2.00). This is a well-known peculiarity of "exact" real computation and is basically one especially relevant case of the 'Table-maker's dilemma' https://en.wikipedia.org/wiki/Rounding#Table-maker%27s_dilem...
If one embraces rational intervals throughout, they can be the computational foundation, and the UX could have the option of displaying the interval for the complete truth or, to gain an intuitive sense, picking a number in the interval to display, such as the median or mediant. Presumably this would be a user choice in any given context.
is that how that kid calculated super long pi?
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets in - I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package" which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and just skip to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements trying to say I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
In the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. Sure enough, the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Nice story. Thanks for sharing. For years, I struggled with the idea of "message passing" for GUIs. Later, I learned it was nothing more than the window procedure (WNDPROC) in the Win32 API. <sad face>
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)

Probably institutes teaching IT stuff. They used to be popular (still?) in my country (India) in the past. That said, there are plenty of places which train in reasonable breadth in programming, embedded etc. now (think less intense bootcamps).
> Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
Hah, I also went down that route. Through my school I could do extra computer stuff, ended up with this certificate at 10 years old or so: https://en.wikipedia.org/wiki/International_Certification_of...
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. Remember I had one row in Excel per invited person with data, and in the Word document placeholders, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but felt cool..
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with some template web-page with some small code I could edit in Notepad and see come to live.
Salespeople are the cancer at the world's butt.
For profit education is the problem here.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy in the sense that it is the worst, except for all the others. Immediately when I saw the example I thought: it is going to be either Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
IEEE 754 is what you get when you want numbers to have huge dynamic range, equal precision across the range, and fixed bit width. It balances speed and accuracy, and produces a result that is very close to the expected result 99.9999999% of the time. A competent numerical analyst can take something you want to do on paper and build a sequence of operations in floating point that compute that result almost exactly.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
William Kahan worked on both IEEE 754 and HP calculators. The speed gap between something like an 8087 and a calculator was not that big back then, either.
Billions or trillions of ops per second and 1987 don't really go together.
https://en.wikipedia.org/wiki/Cray-2
Good point! Side note: Cray-2 did not use IEEE 754 floating point.
https://cray-history.net/2021/08/26/cray-floating-point-numb...
Cray did use floating point. It didn't use IEEE standard floating point. Floating point arithmetic is older than the transistor.
Yeah I know. I linked the specs.
Yeah I mean they were surely too old to support it. But the designers of IEEE-754 must have been aware of these systems when they were making the standard.
> equal precision across the range
What? Pretty sure there's more precision in [0-1] than there is in really big numbers.
Precision in numerics is usually considered in relative terms (eg significant figures). Every floating point number has an equal number of bits of precision. It is true, though, that half of the floats are between -1 and 1. That is because precision is equal across the range.
Only the normal floating point numbers have this property, the sub-normals do not.
In the single precision floats for example there is no 0.000000000000000000000000000000000000000000002 it goes straight from 0.000000000000000000000000000000000000000000001 to 0.000000000000000000000000000000000000000000003
So that's not even one whole digit of precision.
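You can poke at this from Python by reinterpreting raw bit patterns as a float32 (a quick stdlib-only sketch):

    import struct

    def f32_from_bits(bits):
        """Interpret a 32-bit integer as an IEEE 754 single-precision float."""
        return struct.unpack('<f', struct.pack('<I', bits))[0]

    print(f32_from_bits(1))   # ~1.4e-45: the smallest positive subnormal float32
    print(f32_from_bits(2))   # ~2.8e-45: the very next representable value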
Yes, that is true. The subnormal numbers gradually lose precision going towards zero.
Subnormals are a dirty hack to squeeze a bit more breathing space around zero for people who really need it. They aren't even really supported in hardware. Using them in normal contexts is usually an error.
As of 2025, they finally have hardware support from Intel and AMD. IIRC it took until Zen 2 and Ice Lake to do this.
Oh joy! Just in time for all computation to move to GPUs running eight-bit "floats".
IEEE 754 is what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions, even the seemingly weirder parts like -0, subnormals and all the rounding modes. It was not really democratically designed, but done by numerical computing experts coupled with hardware design experts. Every "simplified" implementation of floating point that has appeared (e.g. auto-FTZ mode in vector units) has eventually been dragged kicking and screaming back to the IEEE standard.
Another way to see it is that floating point is the logical extension of fixed point math to log space to deal with numbers across a large orders of magnitude. I don't know if "beautiful" is exactly the right word, but it's an incredibly solid bit of engineering.
I feel like your description comes across as more negative on the design of IEEE-754 floats than you intend. Is there something else you think would have been better? Maybe I’m misreading it.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth and the main operation is f32 + x * y -> f32 (ie multiply and accumulate into f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
Well, I do know some people who really hate subnormals because they are really slow on Intel and kinda slow on Arm. Subnormals I can see being a pain for graphics HW designers. I for one neither love nor hate IEEE 754, other than -0. I have spent far, far too many hours dealing with it. IMHO, it's an encoding artifact masquerading as a feature.
> what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions
This implies a strange way of defining what "beautiful" means in this context.
IEEE754 is not great for pure maths, however, it is fine for real life.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
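A toy Python sketch of that kind of interval arithmetic (just the idea, not any particular library's API):

    def iadd(a, b):
        """Add two intervals (lo, hi)."""
        return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        """Multiply two intervals (lo, hi); the extremes are among the endpoint products."""
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    length = (9.95, 10.05)      # a length measured as 10 +/- 0.05
    width = (4.98, 5.02)        # a width measured as 5 +/- 0.02
    print(imul(length, width))  # roughly (49.551, 50.451): the true area lies in here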
> IEEE754 is not great for pure maths, however, it is fine for real life.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
It becomes a problem when precision errors accumulate in a system, right?
The issue isn't so much that a single calculation is slightly off, it's that many calculations together will be off by a lot at the end.
Is this stupid or..?
> IEEE 754 is like democracy in a sense that it is the worst, except for all the others.
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
Sometimes you need a number system which is 1. approximate 2. compact and fast 3. high dynamic range
You’re going to have a hard time doing better than floats with those constraints.
> so long as it produces accurate results
That's doing a lot of work. IEEE-754 does very well in terms of error vs representation size.
What system has accurate results? I don't know any number system at all in usage that 1) represents numbers with a fixed size 2) Can represent 1/n accurately for reasonable integers 3) can do exponents accurately
Electronic computers were created to be faster and cheaper than a pool of human computers (who may have had slide rules or mechanical adding machines). Human computers were basically doing decimal floating point with limited precision.
There's no "accurate results" most of the time
You can only have a result that's exact enough in your desired precision
It's ideal for engineering calculations which is a common use of computers. There, nobody cares if 1-1=0 exactly or not because you could never have measured those values exactly in the first place. Single precision is good enough for just about any real-world measurement or result while double precision is good for intermediate results without losing accuracy that's visible in the single precision input/output as long as you're not using a numerically unstable algorithm.
Define "accurate"!
Given a computer have finite memory but there are infinitely many real numbers in any range, any system using real numbers will have to use rounding.
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
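The same result is easy to reproduce with ordinary double-precision floats, e.g. in Python:

    x = 8.7
    for _ in range(3):
        x -= 2.9
    print(x)   # about -8.88e-16 rather than 0.0: neither 8.7 nor 2.9 has an exact
               # binary representation, and the rounding errors don't quite cancel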
I wrote the calculator for the original blackberry. Floating point won't do. I implemented decimal based floating point functions to avoid these rounding problems. This sounds harder than it was, basically, the "exponent" part wasn't how many bits to shift, but what power of two to divide by, so that 0.1, 0.001 etc can be represented exactly. Not sure if I had two or three digits of precision beyond whats on the display. 1 digit is pretty standard for 5 function calculators, scientific ones typically have two. It was only a 5 function calculator, so not that hard, plus there was no floating point library by default so doing any floating point really ballooned the size of an app with the floating point library.
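A toy Python sketch of that kind of representation (not the actual BlackBerry code, just the idea of an integer mantissa with a power-of-ten scale):

    from dataclasses import dataclass

    @dataclass
    class Dec:
        mant: int   # integer mantissa
        exp: int    # value = mant / 10**exp

        def __sub__(self, other):
            e = max(self.exp, other.exp)
            a = self.mant * 10 ** (e - self.exp)
            b = other.mant * 10 ** (e - other.exp)
            return Dec(a - b, e)

    balance = Dec(87, 1)        # 8.7, stored exactly
    fare = Dec(29, 1)           # 2.9, stored exactly
    print(balance - fare - fare - fare)   # Dec(mant=0, exp=1), i.e. exactly zero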
> the "exponent" part wasn't how many bits to shift, but what power of two to divide by, so that 0.1, 0.001 etc can be represented exactly
You mean what power of ten to divide by?
yes
Holy shit, I thought you just made mindblowing mechanical devices.
I can see why you wouldn't necessarily just want to use it, but I thought the RIM pager had a JVM with floating point?
I mostly just used mine for email.
The JVM based devices came years later. This was around 1998, with the 386 based blackbery pager that could only do emails over Mobitex, no phone calls. It even looked like a pager. At the time, phones were not so dominant, data switched over mobile only existed on paper, and two-way paging looked like it had a future. So we totally killed the crude 2-way paging networks that were out there. And RIM successfully later made the transition to phone networks. Wasn't till iPhone and android that RIM ran into trouble.
Yup, that's the one I had. Best keyboard I ever used on a pocket device. So I used your calculator, a little.
Sounds like he's just using stock math functions. Both Javascript and Python act the same way when you save the result immediately after subtracting two numbers multiple times, rather than just 8.7 - (2.9*3).
Almost no one has done the calculator app properly. By properly, I mean a complete calculator, like the TI-89.
On Android, I am using an emulator for the TI-89 calculator.
Because no Android app has half the features or works as well.
It's not even about features. Calculators are mostly useful for napkin math - if I can't afford an error, I'll take some full-fledged math software/package and write a program that will be debuggable, testable, and have version control.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like speedcrunch, seem to completely misunderstand the niche of a calculator; are they not using it themselves? A calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
I find that much of the time I want WolframAlpha even for basic arithmetic, because I like the way it tracks and converts units. It's such a simple way to check that my calculation isn't completely off base. If I forget to square something or I multiply when I meant to divide, I get an obviously wrong answer.
Plus of course not having to do even more arithmetic when one site gives me kilograms and another gives me ounces.
qalc also tracks and converts units, and is open source (practical benefit: runs offline). I have it on Android via a Debian subsystem but just checked and Termux simply has it also (pkg install qalc)
Random example off the top of my head to show off some features: say it takes 5 minutes to get to space, and I heard you come around every 90 minutes, but there's differing definitions on whether space is 80 or 100 km above the surface, then if you're curious about the G forces during launch:
(The output has color coding for units, constants, numbers, and operators.)

It understands unicode plusminus for uncertainty tracking, units, function calls like log(n,base), finding factors; it will do currencies too if you let it download a table from the internet... I love this software package. (No affiliation, just a happy user who discovered this way too late in life)
It's not as clever as WolframAlpha, no natural language parsing or Pokédex functions (sometimes I do wish that it knew things like Earth radii), but it also runs anywhere and never tells you the computation took too long and so was cancelled
Edit: I just learned there's now also an Android app! https://github.com/jherkenhoff/qalculate-android | https://f-droid.org/packages/com.jherkenhoff.qalculate/ I've checked before and there wasn't back then, so this is cool. This version says it supports graph plotting which the command-line version doesn't do
Qalculate! has been my go-to calculator on my laptop for years, very happy to have it on my phone now too! And it definitely knows planet radii, try `planet("earth"; "radius")`. Specifically, it knows a bunch of attributes about atoms (most importantly to me, their atomic mass) and planets (including niche things like mean surface temperature). You can see all the data here: https://qalculate.github.io/manual/qalculate-definitions-fun...
Woot! I need to read the docs better! Thanks for sharing that :D
If you're willing to learn to work with RPN calculators (which I think is a good idea), I can recommend RealCalc for Android. It has an RPN mode that is very economic in keypresses and it's clear the developers understand how touchscreens work and how that ties into the kind of work pocket calculators are useful for.
My only gripe with it is that it doesn't solve compounding return equations, but for that one can use an emulated HP-12c.
RealCalc Plus is great on the Android side. If using iPhone/iPad/macOS, try BVCalc. Its RPN mode shows you the algebraic expression (i.e., using infix notation display) for each item on the stack, which both helps you check for entry mistakes and also more easily keep track of what each stack item represents. I haven't found another RPN calculator that can do this.
On Android, I just went straight to using an emulator of the HP42S that got me through engineering school in the early 90s. The muscle memory for the basics was still there, even if I can't remember how to use the advanced functions any more.
I still have my actual HP, but it seems to chew batteries now.
Proper ones are certainly usable for more than napkin math. I deal with fairly simple definite integrals and linear algebra occasionally. It's easier for me to plug this into a programmable calculator than it is to scratch in the dirt on Maxima or Mathematica most of the time if I just need an answer.
This relates to what I wrote in reply to the original tweet thread.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality will use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I was building a calculator app -- especially if I had the resources of Google -- I would start with trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
I'm the developer of an Android calculator called Algeo [1], and I wonder which part of it makes it feel slow/not snappy? I'm trying to constantly improve it, though UX is a hard problem.
[1] - https://play.google.com/store/apps/details?id=com.algeo.alge...
This seems to be an expression mode calculator. It simply calculates the result of an expression, which makes it like the other 999 calculators in the Play Store.
Classic algebraic calculators are able to do things like:
It doesn't have to be basic arithmetic, this way you can do complex numbers, trigonometry, stats, financial calculations, integrals, ODEs etc. Just have a way to juggle operands and inverse operators, and some quick registers/variables one keypress away (see the classic MS/MR mechanism or the stack in RPN). RPN calculators can often be more efficient, although at the cost of some entry barrier.

That's what you do with the classic calculators. Often, you are not even directly calculating things, you're augmenting your intuition and offloading a part of the problem to quickly explore its space in a few keypresses (what if?..), give a guesstimate, and do some sanity checks on whether you're in the right ballpark, all at the same time. Graphing, dimensional analysis in physics, error propagation help a lot in detecting bullshit in your estimates as quickly as possible. If you're also familiar with numerical methods, you can do miracles at the speed of thought. Slide rules were a lot like that as well.
People who do this might not be your target audience, though.
Another app nobody has made is a simple random music player. Tried VLC on Android and adding 5000+ songs from SD card into a playlist for shuffling simply crashes the app. Why do we need a play list anyway, just play the folder! Is it trying to load the whole list at the same time into memory? VLC always works, but not on this task. Found another player that doesn't require building a playlist but when the app is restarted it starts from the same song following the same random seed. Either save the last one or let me set the seed!
Termux:
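Presumably something along these lines (a reconstruction from the description below; the exact command and flags may have differed):

    find . -type f | shuf | head -n 1 | xargs -d '\n' mplayer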
(Or whatever command-line player you already have installed. I just tested with espeak that audio in Termux works for me out of the box and saw someone else mentioning mplayer as working for them in Termux: https://android.stackexchange.com/a/258228)

- It generates a list of all files in the current directory, one per line
- Shuffles the list
- Takes the top entry
- Gives it to mplayer as an argument/parameter
Repeat the last command to play another random song. For infinite play:
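Roughly (again a reconstruction):

    while true; do !!; done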
(Where !! substitutes the last command, so run this after the find...mplayer line)

You can also stick these lines in a shell script, and I seem to remember you can have scripts as icons on your homescreen but I'm not super deep into Termux; it just seemed like a trivial problem to me, as in, small enough that piping like 3 commands does what you want for any size library with no specialised software needed
> Another app nobody has made is a simple random music player.
Marvis on iOS is pretty good at this. I use it to shuffle music with some rules ("low skip %, not added recently, not listened to recently")[0] and it always does a good job.
[0] Because "create playlist" is still broken in iOS Shortcuts, incredibly.
I'm pretty sure the paid version of PowerAmp for Android will do what you want, with or without explicitly creating a playlist.
I have many thousands of mp3s on my phone in nested folders. PowerAmp has a "shuffle all" mode that handles them just fine, as well as other shuffle modes. I've never noticed it repeating a track before I do something to interrupt the shuffle.
Earlier versions (>~ 5 years ago) seemed to have trouble indexing over a few thousand tracks across the phone as a whole, but AFAIK that's been fixed for awhile now.
I can recommend PowerAmp. I've been using it for over a decade and it's been pretty happy with updating my 20,000+ song collection and my 1,000+ song playlist that I sync with a graphical ssh/rsync wrapper (although I've actually been switching to an rclone wrapper, RoundSync, in the last few months).
My personal favorite feature that I got addicted to back when I was using Amarok in KDE 3 was the ability to have a playlist and a queue that resumes to the playlist when exhausted. Then I can listen to an album in order, and then go back to shuffling my driving music playlist when that's done.
Anything that just shuffles on the filesystem/folder level works for this. Even my Honda Civic's stereo does it. Then you have iTunes, which uses playlists, and doesn't work. It starts repeating songs before it exhausts the playlist.
Ah, the old “should a random shuffle repeat songs” debate. Haven’t thought about that in years.
I’m with you in that I think shuffle should be a single list of all songs, played in a random order. But that requires maintaining state, detecting additions and updating the list, etc.
Years ago, a friend was adamant that shuffle should mean picking a random song from the list each time, without state, and if that means the same song plays five times in a row, well, that’s what random means.
> I think shuffle should be a single list of all songs, played in a random order. But that requires maintaining state, detecting additions and updating the list, etc.
You should be able to accomplish this with trivial amounts of state (as in, somewhere around 4 ints).
As an example, I'm envisioning something based on Fermat's little theorem -- determine some prime `p` at least as big as the number of songs you have (N), then to determine the next song, use n := a*n mod p for fixed choice of 1 < a < p, repeating as necessary as long as n > N. This should give you a deterministic permutation of the songs. When you get back to the first song you've played, you can choose to pick a new `a` for a new shuffle, or you can just keep that permutation.
If the list of songs changes, pick new a, p, and update n to be the new position of your current song (and update your notion of "first song of this permutation").
(Regarding why this works: you want {a} to be a generator for the multiplicative group formed by Z/pZ.)
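A small Python sketch of that scheme, with illustrative numbers (90 songs, p = 101, and a = 2, which generates the multiplicative group mod 101):

    N = 90    # number of songs, indexed 1..N
    p = 101   # a prime > N
    a = 2     # a generator of the multiplicative group mod 101

    def next_song(n):
        """Return the song index that follows n in this permutation."""
        while True:
            n = (a * n) % p
            if n <= N:        # skip group elements that aren't valid song indices
                return n

    n, seen = 1, set()
    for _ in range(N):
        n = next_song(n)
        seen.add(n)
    print(len(seen))          # 90: every song appears once before the cycle repeats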
Linear congruential generators have terrible properties if you care about the quality of your randomness, but if all you're doing is shuffling what order your songs play in, they're fine.
Thanks!! I've been looking for an algo to draw pixel positions in a pseudorandom way only once. I didn't know a way to do it without storing and shuffling all positions. Now, I only need to draw a centered filled circle, so there might be a prime number for it, and even if the prime only does it for a given amount of points, I could switch to other primes until the circle is filled, and get an optimal and compressed single-visit scattering algo.
Have you seen this algorithm for dithering? Reminds me of your problem.
https://news.ycombinator.com/item?id=42808889
Yes, but another requirement is the algo to be very fast, and dithering takes way more operations than the proposed Fermat PRNG.
this is something you should read https://extremelearning.com.au/unreasonable-effectiveness-of... (it's effectively the fermat prng described, but goes into more depth)
Mind you, placing many pixels at coordinates from a linear congruential generator will not look random at all.
You may have mathed over my head, but I’m not seeing how it avoids playing already-played songs when the list is expanded.
Say I have a 20 song list, and after listening to 15 I add five more. How does this approach only play the remaining 10 songs (5 that were remaining plus 5 new)?
> Say I have a 20 song list, and after listening to 15 I add five more. How does this approach only play the remaining 10 songs (5 that were remaining plus 5 new)?
It doesn't. If you add 5 more songs, then the algorithm as presented will just treat it as if you're starting a new shuffle.
If you genuinely need to keep track of all the songs you've already played and/or the songs that you have yet to play, then I'm not sure you can do much better than keeping a list of the desired play order, randomized via Fisher-Yates shuffle each time you want a new shuffled ordering -- new songs can be appended to said list and shuffled in with the as-yet-unplayed songs.
One way to do it without retaining additional state would be to generate the initial shuffle for N > current song list. If the new songs' indices come up, they get played. You skip any indices that don't correspond to a valid song when it's time to play them.
This has some obvious downsides (e.g. an empty slot that was skipped when played and filled by a later insert won't be played), but it handles both insertion and deletions without replaying songs and you only need to store a single integer.
> It doesn't. If you add 5 more songs, then the algorithm as presented will just treat it as if you're starting a new shuffle.
You probably shouldn't have quoted "detecting additions and updating the list, etc." then.
This only works for a fixed list.
Eh, it depends what you mean by "works". If you mean that if you add new songs in the middle of playback, it doesn't guarantee that every song is played exactly once before any are repeated, sure, but you can't really do that unless you're actually tracking all of the songs.
Many approaches that guarantee that property have pathological behavior if, say, you add a new song to your library after each song that you've played.
I’d suggest the general solution: the machine can keep a list of the songs it has played, and bump the oldest entries off the list. The list length can be user configurable: 0 handles your truly random friend, 1 would be enough to just avoid immediate repeats, or it could be set to the size of the library. 100 would, I think, give you enough time to not notice any repeats, right?
I'm comfortable with "random play" meaning we're going to pick at random each time, but I'm not OK with calling that "shuffle": shuffle means there was a list of things and we shuffled it. Rolling a D20 is random but it's not shuffling. Games with a random element deliberately (if they're well designed) choose whether to have this independence or not in their design.
A shuffle is a type of permutation. There is room to disagree on the constraints on the type of permutations allowed and how they are made. Nevertheless, I 100% agree that sampling with replacement is not a shuffle.
While I agree with you, as soon as the semantics of “random” vs “shuffle” enter the conversation, lay people are lost.
To me “shuffle” is a good metaphor because a shuffled deck of cards works a specific way (you’d be very surprised to draw the same card twice in a row!)
But these things are implemented by programmers who sometimes start with implementation (“random”) and work back to user experience. And, for a specific type of technical person, “with replacement” is exactly what they’d expect.
If you let programmers do randomness you're in a world of pain.
On the whole programmers given a source of random bytes and told to pick any of 227 songs at random using this data will take one byte, compute byte % 227 and then be astonished that now 29 of the songs are twice as likely as the others to be chosen†.
In a class of fifty my guess is you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227) showing they know what "random" means and all the rest will at least attempt that naive solution even if some of them try it out and realise it's not good enough.
† As a bonus in some languages expect some solutions to never pick the first song, or never pick the last song.
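The bias, and the rejection-sampling fix, are easy to see in a few lines of Python (a quick sketch):

    import collections, os

    SONGS = 227
    counts = collections.Counter()
    for _ in range(1_000_000):
        counts[os.urandom(1)[0] % SONGS] += 1

    # 256 mod 227 = 29, so songs 0..28 each have two byte values mapping to them:
    print(counts[0] / counts[100])   # roughly 2.0

    def unbiased_song():
        """Rejection sampling: throw away bytes that aren't < 227."""
        while True:
            b = os.urandom(1)[0]
            if b < SONGS:
                return b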
My favorite example of RNG misuse resulting in sampling bias is the general approach that looks like `arr.sort(() => Math.random() - 0.5)`.
> you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227)
If you can't deal with the 10% overhead from rejection sampling (assuming your random bytes are uniform), I guess you could try mushing that entropy back into the rest of your bytestream, but yuck.
Wow, that's an abusive ordering function. Presumably this is a thing people might write in... Javascript? And I'm guessing Javascript has to put up with them doing this and they get a coherent result, maybe it's even shuffled, because eh, it worked in one browser so we're stuck with it.
In Rust this abuse would either "work" or panic telling you that er, that's not a coherent ordering so you need to stop doing that. Not certain whether the panic can only arise in debug builds (or whether it would detect this particular abuse, it's not specified whether you will panic only that you might if you don't provide a coherent ordering).
In C++ this is Undefined Behaviour and there's a fair chance you just introduced an RCE vulnerability into your codebase.
It is shuffled badly, and not at all uniformly. And heavily implementation-dependent.
https://stackoverflow.com/questions/962802/is-it-correct-to-...
An example of this out in the wild: https://www.robweir.com/blog/2010/02/microsoft-random-browse...
I guess that would be the difference between sorting the list of songs versus picking from the list of songs.
I literally want to do this but for a social app, for completely anonymous users.
Any info on how I can achieve this?
You must track the permutation you're stepping through.
E.g. you have 4 items. You shuffle them to get a random permutation:
4 2 1 3
Note: these are not indices, but identifiers. Let's say you go through the first two items:
4 2 <you're here> 1 3
And two new items arrive. You insert each item into a random position among the remaining items. E.g:
4 2 <you're here> 5 1 6 3
If items are to be deleted, there are two cases: either they have already been visited, in which case there's nothing to do, or they're in the remaining list, in which case you have to delete them from there.
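In Python, the bookkeeping is small; a sketch using the hypothetical item IDs from above:

    import random

    remaining = [4, 2, 1, 3]
    random.shuffle(remaining)        # the random permutation to step through
    played = []

    def play_next():
        played.append(remaining.pop(0))

    def add_item(item):
        # insert new arrivals at a random position among the *remaining* items
        remaining.insert(random.randint(0, len(remaining)), item)

    def delete_item(item):
        # already played: nothing to do; otherwise drop it from the remaining list
        if item in remaining:
            remaining.remove(item)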
I'm really enjoying the discussion on how shuffle means different things to different people (I personally prefer random, but implementing `shuffle` specifically sounds fun with all of this)
> You insert each item into a random position among the remaining items
Thinking about shuffle + adding, I would have thought "even if it's added to a past position", e.g.
`5 4 6 2 1 3` as valid.
What do folks expect out of shuffle when it reaches the end? A new shuffle, or repeat with the same permutation?
I think all of this depends on the UI presentation, but when “shuffle” is used, I think a good starting point is “what would a person expect from a deck of cards”, since that’s where the metaphor started.
I don’t think that provides a totally clear answer to “what happens at the end”, but for me it’d lean me towards “a new shuffle”, because for me most of the time a shuffled deck of cards draws its last card, the deck will be shuffled again before drawing new cards.
I haven't used it in a while (now using streaming...), But Musicolet (https://krosbits.in/musicolet/) should be able to do this. Offline-only and lightweight.
Hey I found it a couple of hours ago, but it doesn't show the shuffled order of songs. Anyway it's the best so far.
I'd love to hear more about this. What was the other one you found? I wrote Tiny Player for iOS and another one for Mac, and as more of an "album listener" myself I always struggled to keep the shuffle functionality up to other people's expectations.
That is Lark Player, but it has so many ads that I recently uninstalled it and kept trying the recommendations in this thread. Foobar2000 uses a system modal to let you add folders, but the SD card is locked by the system on that modal even after enabling permissions; other apps can access it without issues. Samsung music player can only add up to 1000 songs per playlist and there is no easy way to split my library. And I just found Musicolet, which uses playlists and doesn't crash when adding my library, but it would be perfect if it could show the randomized order of the playlist; it just jumps to random songs, and it would be cool to know what's next and what came before. Winamp (WACUP) on desktop does this perfectly.
I suspect nobody made xxx vlc plug-in because compiling it is far too hard.
I tried to make a joystick controller for a particular use case on one platform (Linux) and I gave up.
VLC solves a hard problem. Supporting lots of different libs, versions, platforms, hardware and on top of that licensing issues.
Vanilla Music can play and shuffle an entire folder
Searching for Vanilla Music in Google Play shows every other player except that one, which reminds me nobody makes search engines anymore.
Vanilla Music isn't on the Play store, get it from F-Droid.
<https://f-droid.org/en/packages/ch.blinkenlights.android.van...>
Thanks, I didn't know it was no longer on there.
https://github.com/vanilla-music/vanilla >Note: As of 23. Jun 2024, Vanilla Music is no longer available in the Google Play store: I simply don't have time to comply with random policy changes and verification requests. Any release you see there is probably an ad-infested fork uploaded by someone else.
I prefer Vinyl Music Player: https://f-droid.org/en/packages/com.poupa.vinylmusicplayer/
Just tried it... Couldn't figure out in 5 minutes how to add files to a playlist. Pass.
GP didn't want to use a playlist.
To create and add to one: long-press on a file/folder/track/album for the context menu or use the ... menu while in the now playing screen.
https://github.com/vanilla-music/vanilla
Mediamonkey allows me to just go to tracks and hit shuffle and then it randomly adds all my tracks to a queue with no repeats. You can do it at any level of hierarchy, allmusic, playlist, album, artist, genre etc.
Edit: I checked I can also shuffle a folder without adding it to the library.
The Samsung music player can do that but you need a Samsung phone.
Tried it but it can only add up to 1000 songs per playlist.
I just hit shuffle and it plays all the media files on my phone (including voice memos which is annoying)
Foobar2000 has no problems in these areas
Tried it, but seems to use the system's modal to add folders, that blocks the SD card folder due to "privacy reasons", even after giving the app permission to access all files.
Well, this one is on Google. "Full filesystem access" is restricted to specific classes of apps like file managers, and the replacement API is very shitty (it has lots of restrictions and is slower by orders of magnitude).
I've put a lot of effort into mine, TechniCalc
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPad OS
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
Marketing page - https://jacobdoescode.com/technicalc
AppStore Link - https://apps.apple.com/gb/app/technicalc-calculator/id150496...
Open source components - https://github.com/jacobp100/technicalc-core
I am seriously curious when it became not a violation of the principle of least surprise that a calculator app uses the network to communicate information from my device (which definitionally belongs to me) to the developer.
From where I am standing, that never happened, but that would require that a simply staggering number of people be classified as unreasonable.
The only network requests are for currency rates, and Sentry crash logs. I don’t collect analytics other than these crash logs
https://jacobdoescode.com/privacy
It is the Sentry surveillance to which I refer. The crashes on my computer do not belong to you.
You can't please everyone. Do note these are only sent when something goes wrong in the app. To give you an indication of how little I collect, for the last 30 days, I can see 3,700 app sessions (only includes people who opted into Apple's analytics), and 14 reports of exceptions within the app. That's fewer than 0.4% of users.
You seem to have completely misunderstood me. The amount of data that is or isn’t collected isn’t relevant.
Just tried the TI-89 emulator on android and it says 1e100 + 1 - 1e100 is 0
Here I got 1 as the answer:
https://imgur.com/a/TH14QZn
Aha! So 1e100 doesn't work but 1^100 does. Ok thanks!
1^100 = 1 though
That's a typo in the response, I believe. The screenshot shows 10^100 (10 to the power of 100).
1e100 is a float.
HP42 on Android checks out
My favorite Android calculator is RealCalc. I've been using it since I got it for PalmOS about 25 years ago.
https://play.google.com/store/apps/details?id=uk.co.nickfine...
I use it too, but it also computes 0 instead of 1 for:
TI-89 doesn't have infinite precision.
Built-in Android calculator does.
They are incomparable. TI-89 has tons of features, but can't take a square root to high accuracy.
Well the 89 is a CAS in disguise most of the time which is mentioned in passing in the article.
But, I agree I almost never want the full power of Mathematica/sage initially but quickly become annoyed with calc apps. The 89 and hp prime//50 have just enough to solve anything where I wouldn’t rather just use a full programming language.
HiPER Calc Pro looks and works like a "physical" calculator; I've used it for years to great effect. I also have Wabbitemu but hardly ever use it, as the former works fine for nearly everything.
I just installed it, for about 2 Eur.
Thanks for the heads up, I will be testing it for a few months, to see if it can replace the TI-89 emulator as my main calculator.
Edit: that calculator gives a result of 0 on this test
Same here, except I use an HP48 emulator because TI sucks and HP rocks.
That's not surprising, considering the TI-89 was based on a full CAS, Derive. (A 3.3 MB program!)
https://en.wikipedia.org/wiki/Derive_(computer_algebra_syste...
Can you tell me which emulator you're using? I loved using the open source Wabbitemu on previous Android phones, but it seems to have been removed from the app store, so I can't install it on newer devices :-/
https://f-droid.org/en/packages/com.eanema.graph89/
GeoGebra, in fact, embeds xcas and has a touchscreen friendly UI.
And the most usable calculator on iOS is easily GrafNCalc83 (a very close TI-83 homage), IMO, even for simple math.
Yes that. I use free42 on my phone and mac. And an actual real HP 42S.
Edit: and Maxima as well on the mac (to back up another user's comment)
Maxima
Yep, I use that in the laptop. But the smartphone still needs a good one calculator App.
Geogebra and Desmos
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
> existential angst
The best (and most educational) expression of that angst that I know: https://mathwithbaddrawings.com/2016/12/28/why-the-number-li....
Uncomputable! That was the word I wanted. I only remembered constructability but that was something else. Thanks.
Wait. Could we in principle find more ways to express some of those uncomputable numbers, or have we conclusively proven we just can't reach them - can't identify any of them in any way we could express?
EDIT: let me guess - there is a proof, and it's probably a flavor of the diagonal argument, right?
For all real numbers in bulk— You may call it a diagonal argument, but it’s just a reduction to Cantor’s original statement, no new proof needed. There are only countably many computable numbers, because there are only countably many programs, because there are only countably many finite strings over any finite alphabet[1].
For individual real numbers— There are of course provably uncomputable ones. Chaitin’s constant is the poster child of these, but you could just take a map of (number of Turing machine in any numbering of all of them) to (terminates or not) and call that a binary fraction. (This is actually not far away from Chaitin’s constant, but the actual one is reweighted a bit to make it more meaningful.) Are there unprovably uncomputable ones? At a guess I’d say so, but I’m not good enough to give a construction offhand.
[1] A countable union of (finite or) countable sets is finite. Rahzrengr gur havba nf sbyybjf: svefg vgrz bs svefg frg; frpbaq vgrz bs svefg frg, svefg vgrz bs frpbaq frg; guveq vgrz bs svefg frg, frpbaq vgrz bs frpbaq frg, svefg vgrz bs guveq frg; rgp. Vg’f snveyl boivbhf gung guvf jbexf, ohg vs lbh jnag gb jevgr gur vairefr znccvat rkcyvpvgyl lbh pna qenj guvf nf n yvar guebhtu gur vagrtre cbvagf bs n dhnqenag.
You have a typo in [1]. It should read "A countable union of (finite or) countable sets is countable."
I do, thanks :)
> Rahzrengr gur havba nf sbyybjf: svefg vgrz bs svefg frg; frpbaq vgrz bs svefg frg, svefg vgrz bs frpbaq frg; guveq vgrz bs svefg frg, frpbaq vgrz bs frpbaq frg, svefg vgrz bs guveq frg; rgp. Vg’f snveyl boivbhf gung guvf jbexf, ohg vs lbh jnag gb jevgr gur vairefr znccvat rkcyvpvgyl lbh pna qenj guvf nf n yvar guebhtu gur vagrtre cbvagf bs n dhnqenag.
You lost me here.
Try: https://cryptii.com/pipes/rot13-decoder
Ok what was the point of that? It just needlessly degrades an otherwise informative post.
You're seriously asking _me_ what https://news.ycombinator.com/item?id=43068787 was thinking when they included a ROT-13 paragraph?
Typically, since pre-WWW UseNet days it's been used as a standard "no-spoiler" technique so that those who don't want to see a movie twist, puzzle answer, etc don't accidently eyeball scan the give away.
BTW, you're welcome, glad I could help.
Thanks for your help, I didn't mean to attack you.
The point is that, in my estimation, the statement in the footnote is a good exercise (provided that you don’t already know it, that it’s not immediately obvious to you, and that you’re still into set theory enough to know what countability and the diagonal argument are). I was initially tempted to just leave it as such, but then thought I’d provide the solution under a spoiler.
Thanks for clarifying. I'm not that young anymore, but I haven't seen this sort of spoiler tagging since forever (assuming that I ever saw it), so I just really didn't know what was going on. Maybe a simple reference to ROT13 at the beginning of your spoiler would have helped.
Yes there's a proof. One flavor is that in any system for expressing numbers using symbols, you can show a correspondence between finite strings of symbols, and whole numbers. So, what works for whole numbers also works for things like proofs and formulas. I think the correspondence may be called "Goedel numbering."
There are two ways to read that question.
If hypercomputation is possible, then there might be a way to express some of those uncomputable numbers. They just won't be possible with an ordinary Turing machine.
(If description is all you need, then it's already possible to describe some uncomputable numbers like Chaitin's constant. But you can't reliably list its digits on an ordinary computer.)
As for the other interpretation, "have we conclusively proven we can't reach them with an ordinary computer", IIRC, the proof that there are infinite uncomputable numbers is as follows: Consider a finitely large program that, when run, outputs the number in question. This program can be encoded as an integer - just read its (binary or source) bytes as a very large base-256 number. Since the set of possible programs is no larger than the set of integers, it's (at most) countably infinite. However, the real numbers are uncountably infinite. Thus a real number is almost never computable.
The proof of the existence of uncomputable numbers is very simple: there are countably many computer programs, but uncountably many numbers.
BTW: I tried constructing a new number that could not be computed by any other Turing machine using a variant of the diagonalization argument. Basically, enumerate all Turing machines that generate numbers:
Turing machine 1
Turing machine 2
Turing machine 3
...
Now construct a new Turing machine that produces a new number in which the first digit is the first digit of Turing machine 1, the second is the second digit of Turing machine 2, etc. Now add 1 (with wrap-around) to each digit.
This will generate a new number that cannot be generated by any of the existing Turing machines.
The bug with this argument (as ChatGPT pointed out) is that, because of the halting problem, we cannot guarantee that any specific Turing machine will halt (i.e., ever produce the digit we need), so the constructed program may not halt, and thus cannot actually compute a number.
SMBC manages to do it in a 4 panel comic: https://www.smbc-comics.com/comic/real-4
> Almost all numbers cannot be practically expressed
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
This behaviour is what you get in say a cheap 1980s digital calculator, but it's not what we actually want. We want correct answers and to do that you need to be smarter. Ideally impossibly smart, but if the calculator is smarter than the person operating it that's a good start.
You're correct that the use of the calculator means we're talking about computable numbers, so that's nice - almost all Reals are non-computable but we ruled those out because we're using a calculator. However just because our results are Computable doesn't let us off the hook. There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
> There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
I would guess that if you pulled in a random sample of 2000 users of pocket calculators and surveyed their use cases you would find a grand total of 0 of them in which the cost function evaluated on a hundredth-decimal place error is at all meaningful.
In other words, no, that difference is not meaningful to a user of a pocket calculator.
What, don't seriously curious people deserve pocket calculators?
Are you implying that "seriously curious people" as a group need more than a hundred digits? If so, I don't think I agree.
Otherwise I can't figure out what you mean.
The sine of x is 0 when x is any integer multiple of pi - it's not approximately zero, it's really zero, actual zero. So clearly some formulae can actually be zero. If I'm curious, I might wonder whether eventually the formula e ** -x reaches zero for some x.
e ** -10 is about 0.000045399929 and presumably you agree that's not zero
e ** -100 is about 3.72 times 10 ** -44, is that still not zero? An IEEE single precision floating point number has a non-zero representation for this, although it is a denormal meaning there's not very much precision left...
e ** -1000 is about 5.075 times 10 ** -435 and so it won't fit in the IEEE single or double precision types. So they both call this zero. Is it zero?
If you take the naive approach you've described, the answer apparently is yes, non-zero numbers are zero. Huh.
[Edited, fixed asterisks]
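For concreteness, here is how those values behave in IEEE double precision, which is what Python floats use (a quick sanity check, nothing more):

```python
import math

print(math.exp(-10))    # ~4.5399929762484854e-05: clearly not zero
print(math.exp(-100))   # ~3.720075976020836e-44: still not zero (fits comfortably in a double)
print(math.exp(-1000))  # 0.0: the true value, ~5.08e-435, underflows double precision
```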
I'm not particularly worried that you would be unable to recognize patterns or rounding. Sorry you confused your hypothetical self.
And for the record, since we're talking about hundred digit numbers, as an IEEE float that would mean 23 exponent bits and you'd have to go below 10e-4000000 before it rounds to zero. Or 32 exponents bits if you follow some previous software implementations.
> And for the record, since we're talking about hundred digit numbers, as an IEEE float that would mean 23 exponent bits and you'd have to go below 10e-4000000 before it rounds to zero. Or 32 exponents bits if you follow some previous software implementations.
Um, no. Have you confused your not-at-all hypothetical self? Are you mistaking the significand, aka the mantissa for an exponent? The significand in a 32-bit "single precision" IEEE float is 23 bits (with an implied leading 1 bit for normal values)
When I wrote that example I of course tried this, since it so happens that I have been testing some conversions recently so...
Please refer back to this post: https://news.ycombinator.com/item?id=43070827
That's the first thing I said in this conversation. I did not ever suggest that single precision was enough. A hundred digits is beyond octuple precision. Octuple has 19 exponent bits, and in general every step up adds 4 more.
And going further up the comment chain, the original version was your mention of computing 40 followed by hundreds of digits of precision.
If I’m using a pocket calculator for a problem then for all intents and purposes 1×10⁻⁴⁰ = 0
Does it matter that some numbers are inexpressible (i.e., cannot be computed)?
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure for cancer has to be a computable number; otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
Well, some classical techniques in standard undergraduate real analysis could lead to numbers outside the set of computable numbers, so if you don't allow non-computable numbers you will need to be more careful about which theorems you derive in real analysis. I do not believe that is important, however; it's much simpler to just work with the set of real numbers rather than the set of computable numbers.
We know of at least one uncomputable number - Chaitin's constant, the probability that a randomly chosen Turing machine halts.
Personally, I do wonder sometimes if real-world physical processes can involve uncomputable numbers. Can an object be placed X units away from some point, where X is an uncomputable number? The implications would be really interesting, no matter whether the answer is yes or no.
It doesn't "matter", but it's interesting to probe the boundary between the easily accessible world and the probably inaccessible world.
Non-discrete real-number-based Fractals are a beautiful visual version of this.
> Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
I worded that sentence carefully, when I said “almost all” :)
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero. But then you can try 1/(atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
You missed a 4. You are trying to say 1/(4atan(1/5)-atan(1/239)-pi/4) is a division by zero. On the other hand 1/(atan(1/5)-atan(1/239)-pi/4) is just -1.68866...
Good catch, thank you.
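For reference, the corrected expression is Machin's formula, pi/4 = 4*atan(1/5) - atan(1/239). A quick sketch with ordinary Python doubles (not the calculator's RRA) shows why a purely numerical calculator can't tell this exact zero apart from a merely tiny value:

```python
import math

# Machin's formula: pi/4 == 4*atan(1/5) - atan(1/239), so this is exactly zero,
# but in double precision all we see is a value within rounding noise of zero:
x = 4 * math.atan(1/5) - math.atan(1/239) - math.pi / 4
print(x)  # on the order of 1e-16 (or exactly 0.0, depending on rounding)

# The expression without the factor of 4 is genuinely nonzero:
y = math.atan(1/5) - math.atan(1/239) - math.pi / 4
print(1 / y)  # ~ -1.68866..., matching the correction above
```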
I played around with the calculator source code from the Android Open Source Project after a previous submission[1]. I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].
[1] https://news.ycombinator.com/item?id=24700705
[2] https://web.archive.org/web/20250126130328/https://blog.acol...
> I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
For the curious, here is the source of ExactCalculator from the commit before all files were deleted: https://android.googlesource.com/platform/packages/apps/Exac..., and here is the dependency CR.java https://android.googlesource.com/platform/external/crcalc/+/...
The way this article talks about using "recursive real arithmetic" (RRA) reminds me of an excellent discussion with Conal Elliott on the Type Theory For All podcast. He talked about moving from representing things discretely to representing things continuously (and therefore more accurately). For instance, before, people represented fonts as blocks of pixels (discrete). They were rough approximations of what the font really was. But then fonts started to be represented as lines/vectors (continuous): no matter the size, they represented exactly what a font was.
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.
- https://www.typetheoryforall.com/episodes/the-lost-elegance-...
- https://www.typetheoryforall.com/episodes/denotational-desig...
Thanks, that's a lovely analogy and I'm excited to listen to that podcast.
I think the general idea of converting things from discrete and implementation-motivated representations to higher-level abstract descriptions (bitmaps to vectors, in your example) is great. It's actually something I'm very interested in, since the higher-level representations are usually much easier to do interesting transformations to. (Another example is going from meshes to SDFs for 3D models.)
You might get a kick out of the "responsive pixel art" HN post from 2015 which implements this idea in a unique way: https://news.ycombinator.com/item?id=11253649
Thanks, that "responsive pixel art" is very cool!
> (Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.)
I hated reading this buzzfeedy style (or apparently LinkedIn-style?) moron-vomit.
I shouldn't complain, just ask my nearest LLM to rewrite this article^W scribbling to a less obnoxious form of writing..
I noticed this too, but I was confused because the calculator article was informative and interesting. It's entirely unlike the inept fluffy slop that gets posted to LinkedIn
I liked it.
I think it’s called broetry although perhaps the sentences are a little long for that.
Some quick research yields a couple of open source CAS, such as OpenAxiom, which uses the Modified BSD license. Granted that Google has strong "NIH" tendencies, but I'm curious why something like this wasn't adapted instead of paying several engineers some undisclosed amount of time to develop a calculation system.
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
To make any type of app really good is super hard.
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my To-Do list. The way you create, edit, browse, drag+drop items is not as all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
To-Do List is an infinite product category.
They are extremely personal and any unwanted features end up as friction.
You'll never find a perfect Todo app because it would have an audience of 1, so it wouldn't be made.
Other examples of Todo apps:
Things, 2Do, Todoist, OmniFocus, Due, Reminders (Apple), Clear, GoodTask, Notes, Google Keep
The list is literally never-ending.
Why does a to-do list have to have any features by default? It could be a blank screen with a "settings" sign in the upper right, where you can enable just the features you need.
If I don't find such a software, I will write it myself. I actually already started:
https://x.com/marekgibney/status/1844077244903571549
I am developing it on the side, while I try to get by with existing solutions.
Not every "feature" is a checkbox. Many features are about how the UI integrates everything and changes over time.
So your "settings" screen is asking the user to design their own app!
That's the developer's job!
What would be an example of a feature that can not be enabled via a checkbox?
Remind me in X number of days.
The setting to use checkboxes.
Which language the settings menu should appear in, for one.
> It could be a blank screen with a "settings" sign in the upper right, where you can enable just the features you need.
That's a "feature" that makes it more annoying for your first time user, which probably puts off a decent proportion of them.
The out-of-the box experience is what most people will use - they will not dive into endless settings and config
(ignoring the insane dev cost of supporting every possible feature combination)
It seems like you're looking for an outliner? Workflowy might fit your needs: https://workflowy.com/
Like others have said, the perfect to-do list is impossible because each person wants wildly different functionality.
My dream to-do list has minimal interaction, with the details handled like I have my own personal secretary. All I'd do is verbally say something like "remind me to do laundry later" and it would do the rest: Categorizing, organizing, prioritizing, scheduling and adding sub-tasks as needed.
I love the idea of automatic sub-tasks created at a level which helps with your particular procrastination level. For example, "do laundry" would add in "gather clothes, bring to laundry room, separate colors, add to washer, set timer, add to dryer, set timer, get clothes, fold clothes, put away, reschedule in a week (but hide until then)". Maybe it even adds Pomodoro timers to help.
LLMs with reasoning might get us there soon - we've been waiting for Knowledge Navigator like assistants for years.
This is the sort of thing I like trying to make llms do, thanks for the idea. I have a discord bot set up already that sends notifications and accepts notes; I will try adding some endpoints and burning some credits I have to see how hard it is to make AI talk to alarm endpoints in a smart way, etc
I'm one of the creators of Godspeed, which is a fast, 100% keyboard oriented to-do app (though we do support drag and drop as well!). And we've got a web app!
https://godspeedapp.com/
Just tried it, and this is very much the opposite of what I am looking for.
What I would like is a very minimal layout. Basically with nothing on the screen. And I want to be able to organize my world by dragging, dropping, swiping recursive items.
I use Google's Keep but you may need to make your own
I agree that Keep is pretty good. I switched over to Carnet when I setup my nextcloud. It's OK, but not as good as Keep.
Any.do did me well until I adopted a new method of stacking tasks into a routine.
The hard part is altering the routine.
Just tried it. Way too much stuff on the screen for my liking. Plus it seems to be not recursive.
Similar to my thoughts about Trello:
https://news.ycombinator.com/item?id=43068867
I find Trello adequate.
One issue I have with Trello is that it has multiple types of items. And that it is not recursive.
When I create an item "Supermarket" and then an item "Bread", I cannot drag and drop the item "Bread" into "Supermarket". But that is how I think. I have a lot of "items" and each item can contain other "items". I don't want any other type of object.
Another problem is that I cannot customize the layout. I can't remove every icon from the items in the list. I only want to see the item names, no other info like the icon that shows that there is a description or anything. But Trello seems to not support that.
I would love to have a To-Do app that is fluid for both one-off tasks and periodic checklists (daily/weekly/monthly/etc.) Most importantly, I want it to yell at me to actually do it. I was pretty surprised that basically nothing seems to fit the bill and even what existing "GTD" type apps could do felt cumbersome and limited.
I share a lot of those thoughts, and built my own calculator too: a calculator that gives right instead of wrong answers.
https://chachatelier.fr/chalk/chalk-home.php
I tried to explain what was going on https://chachatelier.fr/chalk/article/chalk.html, but it's not a very popular topic :-)
Looks really well done, nice!
There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
I use qalculate, it behaves well enough for my needs.
https://qalculate.github.io/
All the calculators that I just tried for the article's expression give the wrong answer (HP Prime, TI-36X Pro, some casio thing). Even google's own online calculator gives the wrong answer, which is mildly ironic. [https://www.google.com/search?q=1e101%2B1-1e101&oq=1e101%2B1]
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
> the dividing line seems to be at 1e33.. Not sure what to make of that
That’s not too bad. They are probably using hand-rolled FP128 format for their numbers. If they were using hardware-provided FP64 arithmetic, the threshold would have been 2^53 ≈ 9E+15: https://en.wikipedia.org/wiki/Double-precision_floating-poin...
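For what it's worth, that FP64 threshold is easy to see with plain Python floats, which are IEEE doubles:

```python
# Python floats are IEEE doubles, so integers are exact only up to 2**53.
print(2.0**53 + 1 - 2.0**53)  # 0.0: the +1 is absorbed
print(2.0**52 + 1 - 2.0**52)  # 1.0: still below the threshold
print(1e16 + 1 - 1e16)        # 0.0
print(1e15 + 1 - 1e15)        # 1.0
```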
Tried it with the HP Prime and it gave the exact 1 for the test. One needs to put it in CAS mode and use the exact form 10^100 instead of 1E100. You get the right answer if the calculator is instructed to use its very powerful CAS engine.
I enjoyed the article, but it seems Apple has since improved their calculator app slightly. The first example is giving me the correct result today. However, the second example with the “Underflow” result is still occurring.
I've just tried the first example on iOS 18.3.1 and it absolutely reproduces perfectly for me.
Oh no- I stand corrected. I tried it again and you are right. I had just woken up when I did my initial test and must have typoed something. I can no longer edit or delete my original comment :(
I remember hearing stories that for a time there was no engineer inside Apple responsible for the iOS Calculator.
Now it seems to be revived as there were some updates to it, but those also removed one of my favourite features -> tapping equals button no longer repeats the last operation.
They fortunately fixed the repeating feature in iOS 18.3. Though it does seem a bit ridiculous that something like this is tied to the entire OS version.
Oh, nice - thanks! So it was a bug after all.
That's just a single number to calculate.
The real fun begins when you do geometry.
Find a finite-memory representation for points which allows exact addition, multiplication and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
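As a rough illustration of why this is hard with plain floating point (a Python sketch, not a proposed solution): mathematically, B and C below are the same point, but bit-for-bit equality usually fails because the two routes accumulate different rounding errors.

```python
import math

def rotate(p, angle):
    x, y = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def reflect_origin(p):
    return (-p[0], -p[1])

A = (1.0, 2.0)
theta = 0.3

B = rotate(A, theta)
C = reflect_origin(rotate(A, math.pi + theta))

print(B)       # roughly (0.3642..., 2.2061...)
print(C)       # almost, but not exactly, the same coordinates
print(B == C)  # usually False, even though the two constructions are mathematically identical
```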
Just use Mathematica.
This seems so elementary that I think open source computer algebra systems can do it. The point underlying the problem is about the closure of operations [1].
Typically one would like to be able to calculate things without making errors, which accumulate.
The symbolic representation you suggest uses a growing amount of memory to represent the point by all the operations which have been applied to it since the origin.
What we would rather do is define a set of operations that is closed for a specific set of points, which allows us to accumulate information by doing the computation rather than deferring it.
One could for example think of using fixed-point numbers to represent the coordinates, and define an extra point at infinity to handle overflow. And then you have some properties that you like and some that you like less. For example, minimum distances, which can define a point uniquely in continuous R^2, are no longer unique when you constrain yourself to integer grids by using fixed-point numbers.
Or you could use rational numbers to store the coordinates, like in CGAL (which lets you know on which side of a plane you are without z-fighting), but they still require growing memory. You can maybe add some rules to handle underflow and overflow.
Or you can merge close points, but maybe you lose some information.
Or you can define the operations on lattices, finite automaton, or do some error correcting codes, dynamic recombining graphs (aka the ruliad).
It's an open problem, see https://en.wikipedia.org/wiki/Robust_geometric_computation for more.
[1] https://en.wikipedia.org/wiki/Closure_(mathematics)
I solved this by making a calculator that rounds to the nearest 1/16. Tiny app for Americans doing DIY work around the house:
https://thomaspark.co/projects/calc-16/
Years ago, The Daily WTF had a challenge for writing the worst calculator app. My submission maintained calculation state by emitting its own source code, recompiling, and running the new executable.
I first learned to program on a Wang 2200 computer with 8KB of RAM, back in 1978. One of the math teachers stayed an hour late most days to allow us nerds to come in an use the two computers. There were more people than computers, so often you'd only get 10 or 15 minutes of time.
Anyway, I wrote a program where you could enter an equation and it would draw an ASCII graph of the curve. I didn't know how to parse expressions and even if I had, I knew it would be slow. The machine had a cassette tape under computer control for storing and loading programs. What I did was take the expression typed by the user, convert it into its tokenized form, and write it out to tape. The program would then load that just-created overlay, which contained something like "1000 DEF FNY(X)=X^2-5", and a FOR loop would sweep X over the designated range and use "LET Y=FNY(X)" to evaluate the expression for me.
As a result, after entering the equation, it would take about five seconds to write out the overlay, rewind a couple blocks, and load the overlay before it would start to plot. But once it started it went pretty fast.
That's a really cool and simple solution to a difficult problem!! I love it!
Hey! A fellow Wang 2200 veteran!
Check out wang2200.org if you don't know about it. There is an emulator that runs on Windows and macOS, lots of scanned documents, many disk images, and some technical details on the microarchitecture of the various 2200 CPUs (they didn't use a microprocessor -- they are all boards and boards of TTL components, until they finally put everything on a single ASIC in the 80s).
People call that a JIT compiler nowadays (?)
Interesting article, and kudos to Boehm for going the extra mile(s), but it seems like overkill to me.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with the iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
The thing about this calculator app is that it can display any number of digits just by scrolling the display field. The UX is "any number of digits the user wants" not some predetermined fixed number of digits.
I commented about this on X.
As a developer, "infinite scroll to get more digits" sounds really cool. It sounds conceptually similar to lazily-evaluated sequences in languages like Clojure and Haskell (where you can have a 'virtually-infinite' list or array -- basically a function -- and can access arbitrarily large indices).
As a user, it sounds like an annoying interface. On the rare case I want to compute e^(-10000), I do not want to scroll for 3 minutes through screens filled with 0s to find the significant digits.
Furthermore, it's not very usable. A key question in this scenario would be: how many zeroes were there?
It's basically impossible to tell with this UI. A better approach is simply to switch to scientific notation for very large or very small numbers, and leave decimal expansion as an extra option for users who need it. (Roughly similar to what Wolfram Alpha gives you for certain expressions.)
This is interesting.
One of the first ideas I had for an app was a calculator that represented digits like shown in the article but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet like interface you could verify if the operations produced the final equation you were modeling in order to help validate if the number was correct or not. I had a TI-89 that could do something close and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn’t seem that much easier to me.
That's a CAS, as mentioned. There are plenty of open source libraries available, but one that specifically implements the algorithms discussed in this article is flintlib. Here's an example from their docs showing exactly what you want: https://flintlib.org/doc/examples_calcium.html#examples-calc...
Thanks!
Isn't what you are asking for a CAS?
Perhaps amusingly, the implementation referenced in Boehm's paper is a still-unmerged Android platform CL adding tests using this approach: https://android-review.googlesource.com/c/platform/art/+/101...
Great resource! For my calculator I also wanted to tackle physics, which expands on the number definition with measurement error size.
It is a surprisingly hard problem.
https://recomputer.github.io/
Yours gives 1 on (10^100)+1-(10^100), cool.
At the risk of coming across as being a spoilsport, I think when someone says "anyone can write a calculator app", they just mean an app that simulates a pocket calculator (which is indeed pretty easy) as opposed to one which always gives precisely the right answer (which is indeed impossible). Also, you can avoid the most embarrassing errors just by rearranging the terms to do cancellation where possible, e.g. sqrt(2) * 3 * sqrt(2) is absolutely precisely 6, not 6 within some degree of approximation.
Pocket calculators are not using 32 bit floating point math.
In fact, even cheap no-name calculators are using BCD (binary-coded decimal) instead of "normal" representation of numbers.
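As a rough illustration, using Python's decimal module as a stand-in for BCD: decimal arithmetic keeps decimal inputs exact, which is why cheap calculators don't show binary-float artifacts, though it still can't represent every rational exactly.

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                    # False: binary floats
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True: decimal arithmetic

# Decimal arithmetic still can't represent 1/3 exactly, of course:
print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333 (28 digits by default)
```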
> as opposed to one which always gives precisely the right answer (which is indeed impossible)
Per the article, it's completely possible. Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
> Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
That statement seems to belittle the amount of effort and thought described in the article. And wildly contradicts my experience.
It... really doesn't seem like a lot of effort and thought. I feel like anyone who's implemented a command algebra for anything is already halfway there.
I don't know what a command algebra is, for example.
> It's too slow.
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed after this paragraph I expected a better algorithm and instead they decided to give up. Fredrik Johansson in his paper "Calcium: computing in exact real and complex fields" gives a partial algorithm for the problem and writes "Algorithm 2 is inspired by Richardson’s algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson’s algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those system".
I use python repl as my primary calculator on my computer.
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (unless you overflow your entire memory), so that kind of precision loss isn't a big deal.
2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.
3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops, this is Python.
4. Every math feature in the Windows default calculator is available in the math library.
5. As bad as Python's performance reputation is, it's not at all going to be noticeable for simple math.
I was always a little envious of the people who could use bc because they knew how. I know Python, and it's installed on Linux by default, so now I am no longer envious.
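A quick sketch of that workflow in plain CPython, including where floats sneak back in:

```python
# Python ints are arbitrary precision, so the article's example is exact:
print(10**100 + 1 - 10**100)   # 1

# But anything touching floats falls back to IEEE doubles:
print(1e100 + 1 - 1e100)       # 0.0

# The fractions module keeps exact rationals when you need them:
from fractions import Fraction
print(Fraction(1, 3) + 5 - Fraction(1, 3))  # 5
```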
> Obviously we'll want a symbolic representation for the real number 1
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
I think the issue was that they are representing a real as a product of a rational and that more complicated type, so without a symbolic representation for 1, when representing a rational they would have to multiply it by an RRA representation of 1, which brings in all the decision-problem issues.
Yep, this is exactly it!
Sorry for being unclear about this. A number is being expressed as a rational times a real. In the case where the rational is exactly the number we want, we want to be able to set the real to 1, so the multiplication has no effect
Ahhh OK so it's essentially the null or identity value, saying "you don't need this component for an exact representation". That makes sense, thanks.
Because they express numbers as a rational times a real, the real part in all those cases would be one. When it's one, you do rational math as normal without involving reals.
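A minimal toy sketch of that idea in Python (my own illustration, not the paper's implementation): a number is a rational times a real part, and a symbolic ONE tag for the real part lets the pure-rational cases stay exact.

```python
from fractions import Fraction

ONE = "one"  # symbolic tag meaning "the real part is exactly 1"

class Num:
    """Toy number: a rational times a real part (here only the ONE tag is handled)."""
    def __init__(self, rational, real=ONE):
        self.rational = Fraction(rational)
        self.real = real

    def __mul__(self, other):
        if self.real is ONE and other.real is ONE:
            # Both real parts are exactly 1: multiply the rationals, stay exact.
            return Num(self.rational * other.rational)
        # Otherwise we'd have to approximate the real parts
        # (recursive real arithmetic in the actual calculator).
        raise NotImplementedError("fall back to RRA here")

a = Num(Fraction(1, 3))
b = Num(3)
print((a * b).rational)  # 1, exactly, with no real-number machinery involved
```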
Off topic, but I believe calling this specific kind of number "real" is a misnomer. Nothing in reality is an expression of a real number. Real numbers pop up only when we abstract reality into mathematical models.
In Polish, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and reals should be called something like "abstract" or "limiting", because they first pop up as limits of some process working on rational numbers for an infinite number of steps.
> This means a calculator built on floating point numbers is like a house built on sand.
I've taken multiple numerical analysis courses, including at the graduate level.
The only thing I've learnt was: be afraid, very afraid.
I really hate when people put cat images and memes in a serious article.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me when SideFX started putting memes into their official tutorial youtube channel. At least this is just a webpage and we can scroll through them...
While we're already breaking the HN guidelines—"Please don't complain about tangential annoyances—e.g. article or website formats"—let me just say that the scrolljacking on this article is awful.
I've not intentionally implemented any scrolljacking (I'm using the default obsidian template), but I'm curious what you mean as I also don't see where the scrolljacking would happen. Could you elaborate on the way in which the user experience is awful now, so I can improve it?
onScroll in https://publish.obsidian.md/app.js?4bb6aa9a821f975db2a1
Try the page down key.
this is fixed now!
It's not.
page down should work now, maybe you need to hard refresh?
Define "should work". It's still scrolljacking. I don't get my native browser smooth scrolling down a page.
What browser are you using? Can you describe the issue? Typically scroll jacking is when you hook on scroll to forcefully scroll the page to something, but that's not happening here.
> What browser are you using?
Safari
> Typically scroll jacking is when you hook on scroll to forcefully scroll the page to something, but that's not happening here.
That's literally what's happening here. Open the web inspector, and set a breakpoint on the scroll event.
Bah, cats have a place in programming articles, regardless of seriousness
Also, the "high-school poem"-type writing style is quite jarring, but forgiven since he acknowledged it at the end of the article.
> Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.
This was not intended to be a serious article, like something you'd submit for publication in an ACM journal.
The last sentence is: "(Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.)"
The tone of the article has given away the fact that the article is not serious. At least not the way it's presented. You want something serious? Go read the pdf.
And I don't mind at all. Without this article, I probably will never know what's in the paper and how they iterated. I'll likely give up after reading the abstract -- "oh, they solved a problem". But this article actually makes much more motivating to read the original paper, which I plan to do now.
I'm happy to have spread the good word! Note that when you read the paper, some implementation details are slightly different than my description. For instance, they always store the recursive real form of the real part of each number, even when the symbolic part perfectly describes it. I removed this redundancy to try to simplify it for twitter, but I hope it doesn't confuse those who go on to read the paper afterwards.
Cats rule our world. So nothing wrong about it.
The original is a Twitter thread, not a serious article.
Two cat pictures. 0 memes. Lighten up.
Well the cats hate you right back, how dare you. The whole point of the internet is to post cats on it.
Hasn't this been solved in cheap pocket calculators for decades before this?
HP scientific calculators go back to the 60s and can presumably add 0.6 to 3 without adding small values to the 20th significant digit.
π+1−π = 1.0000000000000
But
π−π = 0
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
that would require an algebraic solver which is definitely possible but more complex than really warranted for a "basic" calculator
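For what it's worth, an off-the-shelf symbolic library can do this particular simplification; a small sketch assuming SymPy is available:

```python
from sympy import pi, simplify

print(simplify(pi + 1 - pi))  # 1
print(simplify(pi - pi))      # 0
```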
Could this be the reason why iPads haven't had a calculator app until recently?
Because it's such a difficult problem to solve that it required elite coders and Masters/PhD level knowledge to even make an attempt?
Apple Finally Plans To Release a Calculator App for iPad Later This Year: https://www.macrumors.com/2024/04/23/calculator-app-for-ipad...
I haven’t really used the iPad’s calculator app, but it looks exactly like a larger version of the iPhone app. So I don’t think there are any technical reasons why it took so long for the iPad to get that app.
Due to backwards compatibility modern PC CPUs have some mathematical constants in hardware, one of them Pi https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f... Moreover, that FLDPI instruction delivers 80 bits of precision, i.e. more precise than FP64.
That’s pretty much useless in modern world because the whole x87 FPU is deprecated. Modern compilers are generating SSE1 and SSE2 instructions for floating-point arithmetic, instead of x87.
Hmmm, solved a lot of these problems as well when building my own calculator: https://github.com/crouther/SciCalc
Seems like Apple got lazy with their calculator; they didn't even realize they had so many flaws... Math Notes is pretty cool though.
As far as I know, the Windows calculator has a similar approach. It uses rationals, and switches to Taylor expansions to try to avoid cancellation errors. Microsoft open-sourced it some time ago on GitHub.
lowkey this is why ieee 754 floating point is both a blessing and a curse, like yeah it’s fast n standardized but also introduces unavoidable precision loss, esp w iterative computations where rounding errors stack up in unpredictable ways. ppl act like increasing precision bits solves everything. but u just push the problem further down, still dealing w truncation, cancellation, etc. (and edge cases where numerical stability breaks down.)
… and this is why interval arithmetic and arbitrary precision methods exist, so it gives guaranteed bounds on error instead of just hoping fp rounding doesn’t mess things up too bad. but obv those come w their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly w operand size.
wonder if hybrid approaches could be the move, like symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. could optimize tradeoffs dynamically. so we’d keep things efficient while minimizing precision loss in critical operations. esp useful in contexts where precision requirements shift in real time. might even be interesting to explore adaptive precision techniques (where computations start at lower precision but refine iteratively based on error estimates).
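A minimal sketch of the interval idea in Python (no directed rounding, so not a real implementation) that also shows the over-conservatism mentioned above:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

x = Interval(0.1, 0.2)
print(x - x)  # Interval(lo=-0.1, hi=0.1): correct bounds, but much wider than the
              # true answer (exactly 0) because the two x's are treated as independent
```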
This article was really well written. Usually in such articles I understand about 50%, maybe 70% if I'm lucky, but in this one I've understood nearly everything. It's not so much a smartness thing as an absolute refusal on my part to learn the jargon of programming, as well as my severe lack of knowledge of all the big words that are thrown around lol. But really simply written, love it.
I wrote an OCaml implementation of this paper a few years ago, which I've now extracted into its own [repo](https://github.com/joelburget/constructive-reals/blob/main/C...)
The link in the paper to their Java implementation is now broken: does anyone have a current link?
I thought this was going to be about how Apple completely destroyed the calculator on iOS with the latest update.
Now it does the running ticker tape thing, which means you can't use the AC button to quickly start over, because there is no AC button anymore!
I know it's supposed to be easier/better for the user, but they didn't even give me a way to go back to the old behavior.
Given the function to compute pi:
Wouldn’t that return a value where the error of the result is 4x the requested tolerance?

If you accept that Pi and Sqrt(2) will be represented as a terminating series of digits (say, 30), then 99% of the problems stated go away. My HP calculator doesn't represent the square root of 2 as a magic number, it's 1.414213562.
At some point, when I get a spare 5 years (and/or if people start paying for software again), I will start to work on a calculator application. Number system wrangling is quite fun and challenging, and I am hoping to incorporate units as a first-class citizen.
This is really cool, but it does show how Google works. They’ll pay this guy ~$3million a year (assuming stock appreciation) to do this but almost no end user will appreciate it in the calculator app itself.
The bar for "the greatest calculator app development story ever told" it should be noted, is quite high :)
https://github.com/based2/KB/blob/main/math/aaa.md#calculato...
Does anyone know if this was the system used by higher end TI calculators like the TI-92? It had a 'rational' mode for exact answers and I suspect that it used RRA for that.
The TI-92 and similar have a full-on computer algebra system that they use when they're in exact mode [1]. It does symbolic manipulation.
This is different from what the post (and linked paper) discuss, where the result will degrade to recursive real arithmetic, which is correct but only to a bounded level of precision. A CAS will always give a fully-exact (although sometimes very unwieldy) answer.
[1] See page 87 here: https://sites.science.oregonstate.edu/math/home/programs/und...
I doubt that most people using the calc app expect it to handle such situations. It's nice that it does of course but IMO it misses the point that the inputs to a lot of real world calculations are inaccurate to start with.
i.e. it's more likely that I've made a few-mm mistake when measuring the radius of my table than that I'm not using a precise enough version of Pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
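As a worked example of the kind of feature being described, here is first-order error propagation for the table-area case, with made-up measurements:

```python
import math

# Table radius measured as 0.50 m, give or take 2 mm.
r, dr = 0.50, 0.002

area = math.pi * r**2
# First-order error propagation: dA ~= |dA/dr| * dr = 2*pi*r*dr
darea = 2 * math.pi * r * dr

print(f"area = {area:.4f} m^2 +/- {darea:.4f} m^2")
# area = 0.7854 m^2 +/- 0.0063 m^2 -- the measurement error dwarfs any
# error from the calculator's value of pi
```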
This is a crazy take to me. Most people don’t care that a calculator gives them a correct answer? It’s just an estimator?
Why would we not expect it to work when we know how to build ones that do, and people use it as a replacement for math on paper?
Everything in life is estimation. A calculator that tells you the perfect answer in some highly unusual situation probably isn't fixing the most common source of error.
e.g. if I measure an angle and I am not sure whether it's 45 degrees or 46, then an answer like this is pointless: 0.7071067811865476
cos of 46 (if I've converted properly to radians) is 0.6946583704589973
so my error is about 0.01 and those long lists of digits imply a precision I don't have.
I think it would be more useful for most people to tell them how much error there is in their results after guessing or letting them assign the estimated error in their inputs.
> Everything in life is estimation
Examples include finance and resource accounting. Mathematical proof (yes sometimes they involve numbers), etc.
Even in engineering and carpentry it’s not true. The design process is idealization, without real world measurements. It’s conceptually useful for precise numbers to sum properly on paper. For example it’s common to divide lengths into fractional amounts which are expected to sum to a whole.
> tell them how much error there is
But once again, we know how to build calculators that do most calculations with 0 error. So why are we planning for an estimation problem we don’t have?
Read the article. yes if you want to put out sqrt(2) in decimal form, it will be an approximate. But you can present it as sqrt(2).
>> tell them how much error there is

> But once again, we know how to build calculators that do most calculations with 0 error. So why are we planning for an estimation problem we don't have?
We have accepted lack of perfection from calculators long ago. I cannot think of a use-case which needs it from anyone I know. Perhaps some limited number of people out there really need a calculator that can do these things but I suspect that if they do there's a great chance they don't know it can handle that sort of issue.
I have more trouble with the positions of the buttons in the UI than with sums that don't work out as expected. The effort to get buttons right seems far less to me.
I can think of useful things I'd like when I'm doing real-world work which this feature doesn't address at all and I wonder why such an emphasis was put on something which isn't really that transformational.
I understand that if you use American units you might be calculating things in fractions of an inch but since I've never had to use those units it's never been necessary to do that sort of calculation. I suppose if that helps someone then yay but I can only sympathise to an extent.
Where I have problems is with things that aren't precise - where the bit of wood that I cut turns out a millimetre too short and ends up being useless.
I really do think we should just use the symbolic systems of math rather than trying to bring natural world numbers into a digital number space. It's this mapping that inherently leads to compensating strategies. I guess this is called an algebraic system like the author mentioned.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1 and 0, don't forget it's more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
I really wonder what the business case for spending so much effort on such precision was. Who are the users who need such accuracy but are using android calculator?
Students learning about real numbers. Yes seriously.
Unlike software engineers who have already studied IEEE754 numbers, you can't expect a middle school student to know concepts like catastrophic cancellation. A middle school student may want to poke around with trigonometric functions and pi to study their properties, but a true computer algebra system might not be available to them. They might not understand that a random calculator app doesn't behave correctly because it's not using the same kind of numbers discussed in their math class.
While phones are mostly a circus, people do try to use them for serious things. For a program, you make the calculations as accurate as the application requires. If you don't know what a tool will be used for, you never really get to feel satisfied.
Really interesting article. I noticed that my Android calculator app could display irrational numbers like PI to an impressive amount of digits, if I hold it sideways.
You can also scroll it to make it display more digits.
How does an "old school" physical calculator handle the floating point precision problem?
This reminds me of solving Project Euler problems that are intentionally not possible to solve with a simple float representation of numbers.
No it is easy, you just throw up a math error whenever a smartass tries something like this. Like calculators do.
There's a point at which you're really building a computer algebra system.
Or you can do what the Windows 11 calculator does and not even get 1+2*3 right.
Are you in standard or scientific mode? In standard, each new operator (not sure if that's the correct term) is applied immediately, i.e. 1+2x3 is worked out as 1+2 (stored into the buffer as 3) x 3 = 9.
But scientific mode does it correctly: it just appends the new operation onto the expression instead of applying it immediately.
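A small Python sketch of the two evaluation styles being described (hypothetical token lists, not the actual Windows calculator code):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "x": operator.mul, "/": operator.truediv}

def immediate_mode(tokens):
    # Desk-calculator behaviour: apply each operator as soon as it's entered.
    acc = float(tokens[0])
    for op, val in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, float(val))
    return acc

print(immediate_mode(["1", "+", "2", "x", "3"]))  # 9.0: (1 + 2) * 3
print(eval("1 + 2 * 3"))                          # 7: normal operator precedence
```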
I'm on windows 11. I just did it and it replied "7". I subtracted 7 to see if there was some epsilon error but it reported "0". What do you experience?
Why does the author insist on using a dick bar? With the "contact me" portion, it takes up 30% of my screen on an iPhone SE.
There should have been an "x" on the right of the "contact me" portion that you could click to make it go away. Sounds like it didn't show up for you, so sorry about that. Unfortunately I don't have an iPhone SE to test against and the "x" does seem to show up on the iPhone SE screen-size simulator in Chrome. This means I don't know how to reproduce the issue and probably won't be able to resolve it without removing the "contact me" page entirely, which I'm not willing to do right now.
A what?
I'm also reading this on an sPhone but don't remember seeing anything that looked like, well... what you said
I just tried this in raku App::Crag...
crag 'say (10**100) + 1 − (10**100)' #1
Raku uses Rats by default (Rational numbers) unless you ask for floating point.
Anyone know of a comparison of the linked paper's algorithm to how Gavin Howard's 'bc' CLI calculator does it?
So, 'bc' just has the (big) rationals. Rationals are the numbers you could make by taking one integer (say 5 or minus sixteen trillion and fifty-one) and dividing it by some positive integer (such as three or sixty-two thousand)
If we have a "Big Integer" type which can represent arbitrarily huge integers, such as 10 to the power 5000, we can use two of these to make a Big Rational, and so that's what bc has.
But the rationals aren't enough for all the features on your calculator. What's the square root of ten ? How about the square root of 40 ? Now, multiply those together. The correct answer is 20. Not 20.00000000000000001 but exactly 20.
Ahh, thank you. Yes indeed: bc says:
Android calculator, on the other hand, gets this one right.

Gavin Howard here.
Yes, GP is entirely correct. I want to do something like the article, but the bc standard (POSIX) requires a decimal BigInteger representation.
I am glad you like my bc!
I actually use bc a lot, and the fact that it's just the big rationals was annoying, which is why I set off on the route that ended with my crate `realistic`.
Amusingly, one of the things I liked in bc was that I could write stuff like sqrt(10) * sqrt(40) and it works -- but even the more-bc-like command-line toy for my own use doesn't do this; it turns out a few months of writing the guts of a computable-reals implementation makes (* (sqrt 10) (sqrt 40)) seem like a completely reasonable way to write what I meant, and so "Make it work like bc" faded from "Important" to "Eh, whatever, I'll get to it later".
If you'd asked me a year ago if "fix edge case bugs in converting realistic::Real to f64" would happen before "Have natural expressions like 1 + 2 * 3 do what is expected" I'd have said not a chance, but shows how much I knew.
> Showing 0.0000000000000 on the screen, when the answer is exactly 0, would be a horrible user experience.
> They realized that it's not the end of the world if they show "0.000000..." in a case where the answer is exactly 0
so... devs self-made a requirement, got into trouble (complexity) - removed the requirement, trouble didn't go anywhere
just keep saying "it's a win" and you'll be winning, I guess
No? They made a goal to show 0.0000 in as few places as possible, and they got as close to it as they could without compromising their other requirements.
Was given the task to build a simple calculator app as a project for a Java class I took in college.
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers though. PI accurate to the 8 digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
Saw the thread on Twitter. Kudos to the author for going in so much detail!
Not a calculator engineer but this seems hideously complex?
Maybe, though in the paper (not the article):
> We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator
(with a footnote: This excludes reports from one or two bugs that have now been fixed for many months. Unfortunately, we continue to receive complaints about incorrect results, mostly for two reasons. Users often do not understand the difference between degrees and radians. Second, there is no standard way to parse calculator expressions. 1 + 10% is 0.11. 10% is 0.1. What’s 10% + 10%?)
When you have 3 billion users, I can imagine that getting rid of bugs that only affect 0.001% of your userbase is still worthwhile and probably pays for itself in reduced support costs.
I’m confused. Why would 1 + 10% obviously be 0.11?
I expected 1.1 (which is what my iOS calculator reported, when I got curious).
I do understand the question of parsing. I just struggle to understand why the first one is confidently stated to correctly result in a particular answer. It feels like a perfect example itself of a problem with unclear parsing.
> 1 + 10% is 0.11.
I know adding % has multiple conventions, but this one seems odd, I'd interpret 1 + 10% as "one plus 10 percent of one" which is 1.1, or as 1 + 10 / 100 which happens to be also 1.1 here
The only interpretation that'd make it 0.11 is if it represents 1% + 10%, but then the question of 10% + 10% is answered: 0.2 or 20%. Or maybe there's a typo and it was supposed to say "0.1 + 10%"
1 + 10% could parse like the following:
(1+10)%
Which is 11% or 0.11
I think a big issue with how we teach math, is the casualness with which we introduce children to floating points.
Its like: Hey little Bobby, now that you can count here are the ints and multiplication/division. For the rest of your life there will be things to learn about them and their algebra.
Tomorrow we'll learn how to put a ".25" behind it. Nothing serious. Just adds multiple different types of infinities with profound impact on exactness and computability, which you have yet to learn about. But it lets you write 1/4 without a fraction, which means it's simple!
Totally agree. It bothered me when I was younger, though I had no idea how to explain why, but this should be deeply unsettling to everyone who encounters it:
Oh that number? It’s just a Laurent series. Just take the limit of the partial sums.
> For the rest of your life there will be things to learn about them and their algebra.
That’s just not true for the vast majority of people.
It's available to learn whether or not they take advantage of it.
Sure. Just like open heart surgery, Medieval English, and penguin husbandry.
There is no floating point here.
Real numbers are quite complex (no pun). Understanding the material well is a junior level math major course.
If you really understand the existing math curriculum this should be high school level.
For an everyday use calculator? Sure. It's still fun and challenging to create a calculator that can handle "as much" math/arithmetics as possible.
It’s really not just for experts though. Even dealing with fractions is going to require more than a naive implementation.
Nice story. Building a calculator at that time was a tough task; today building a calculator is just a prompt away. Inspiring.
if "answer" overflows, switch to symbolic mode.
Not that simple: 1/3 + 5 - 1/3 should be 5, and it doesn't overflow in IEEE 754.
That does appear to equal exactly 5... would you care to show how it doesn't?
Okay... that was just an example (and a false one, apparently).
It's easy enough to find an example where your typical FP operations don't work out.
https://godbolt.org/z/Mr4Ez8xz1
Yes, anyone can make a calculator.
I don't care if it gives me "Underflow" for bs like e^-1000, just give me a text field that will be calculated into result that's represented in the way I want (sci notation, hex, binary, ascii etc whatever).
All standard calculators are imitations of a desktop calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least OSX has cmd+space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TLDR: the real problem of calculators is their UI, not the arithmetic core.
So did they fix the iOS Calculator bug?
On another note: since a calculator is so complex, are there any open source cross-platform libraries that make it easier to implement?
I imagine you could do most or all of this with yacas, which is actually a computer algebra system (GPL).
From the linked post: 'A "computer algebra system" would have accomplished a similar goal, but been much slower and much more complicated'
Writing a CAS from scratch would've been much more complicated.
Reusing an existing one? Maybe not.
Yes, it would likely be slower, but is a 1ms vs. 10ms response time in the calculator app really such a big deal? Entering a correct calculation/formula on the smartphone likely takes much longer.
He was not working on the iOS Calculator.
Slightly disappointing: The calculator embedded in Google's search page also gives the wrong answer (0) for (10^100) + 1 − (10^100). So apparently they don't use the insights they gained from their Android calculator.
Duckduckgo and (apt install) qalc do it correctly fwiw
The windows calculator produces "1.e+100". Whatever that's supposed to mean.
And yet Android's calculator is quite bad. Despite being able to correctly calculate stuff that 99.99% of the population don't care about, it lacks many scientific operations that a good chunk of accountants, engineers and coders would make use of regularly. This is a classic situation of engineers solving the fun/challenging problems before the customer's actual problems.
What exactly is missing? https://imgur.com/a/q0yevdW
I removed telemetry on my Win10 system and now calc.exe crashes on basic calculations. I've reported this but nobody cares, because the next step in troubleshooting is to reinstall Windows. So if telemetry fails, calc.exe will silently explode. Therefore no, not just anyone can make it.
Won't fix: https://github.com/microsoft/calculator/issues/148
> Won't fix: https://github.com/microsoft/calculator/issues/148
I don't see how one can expect them to take a report worded this way seriously. Perhaps if they actually reported the crash without the tantrum the team would fix it.
So they send telemetry that shows what people are calculating.
Does that mean there are some "dangerous" numbers that can be used to flag someone?
I don't know if you've already heard of illegal numbers; otherwise, you're one of today's lucky ten thousand!
https://en.wikipedia.org/wiki/Illegal_number
That one I know (ah, the days Digg died...), but it seems there are some others? Ones that can be calculated on a Windows calculator?
I did begin poking at the crash for security issues but gave up after 10 minutes
Based on the thread, you can build it from the source code and telemetry won’t be enabled…
I just use the Win7 binary. Calc.exe is the definition of something that really doesn't need to change.
Windows XP's mspaint.exe stopped working at some point :(. I was also on team "simple tool that worked the way I wanted" for as long as that lasted. (I don't use Windows anymore, not only for this reason obviously, but still, I don't seem to have these problems anymore where you can't make things work a certain way.)
How do you make the Win7 binary run? Last I tried it doesn’t run if you just have the .exe?
https://win7games.com/#calc
Could have just used an off-the-shelf CAS.
The point of the article is to teach you how calculators work. Not find a piece of software to unblock you.
You may well find yourself in the field of computing having to compute something!
The point of the article is to show building a calculator requires a CAS, which should have been obvious to anyone with a basic understanding of how a calculator works.
The premise of the article is itself somewhat bogus, but I suppose there are programmers today who never had to work with a graphing calculator.
While RRA (recursive real arithmetic) is an interesting approach, ultimately it wasn't sufficient.
Re-using an off-the-shelf CAS would have been the more practical solution, avoiding all the extra R&D on a novel number representation that wasn't quite sufficient to do the job.
"Over the past year or so, I've reluctantly come to the conclusion I need to leave Elm and migrate to some other MUA like PINE or mutt..."
lol I ran into this when making a calculator program because Google's calculator didn't do certain operations (such as adding clock time results like 1:23+1:54) and also because Google occasionally accuses me of being a bot when I search for too many equations.
Maybe I'll get back to the project and finish it this year.
Cool story. All programming students should be made to create a calculator in school so that they truly understand the issues at hand.
Good read, thanks.
Recursive descent parser; normally any decent developer can write one. My version: https://caub.github.io/misc/calculator
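For anyone curious, a minimal sketch of the idea in Python (not the linked implementation; just the usual one-function-per-precedence-level structure, handling + - * / and parentheses, with no error handling):

    import re
    from fractions import Fraction

    def tokenize(s):
        # numbers, operators, and parentheses; whitespace is skipped
        return re.findall(r'\d+\.?\d*|[()+\-*/]', s)

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def next(self):
            tok = self.peek()
            self.pos += 1
            return tok

        def expr(self):      # expr := term (('+' | '-') term)*
            value = self.term()
            while self.peek() in ('+', '-'):
                if self.next() == '+':
                    value += self.term()
                else:
                    value -= self.term()
            return value

        def term(self):      # term := factor (('*' | '/') factor)*
            value = self.factor()
            while self.peek() in ('*', '/'):
                if self.next() == '*':
                    value *= self.factor()
                else:
                    value /= self.factor()
            return value

        def factor(self):    # factor := '-' factor | '(' expr ')' | number
            tok = self.next()
            if tok == '-':
                return -self.factor()
            if tok == '(':
                value = self.expr()
                self.next()  # consume ')'
                return value
            return Fraction(tok)  # exact rationals, to dodge FP issues in the sketch

    print(Parser(tokenize('1 + 2 * (3 - 4) / 5')).expr())   # 3/5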
"-2 ** 3 SyntaxError: unparenthesized unary expression can't appear on the left-hand side of '*' "
That's actually a great error; I have made the mistake of expecting "-2 ** 2" to output 4 instead of -4 before.
^ fyi, this comment reveals you didn't RTFA
Correct, but my goal was just to get the same result as JS `eval()`, except for -n ** m, because in my opinion that shouldn't require parentheses. It's still a good learning exercise to do this; I don't want to deal with floating point issues, etc.
Re: (10^100)+1-(10^100)
i) The answer is 0 if you cancel out the two (10^100) expressions.
ii) The answer is 1 if you compute 10^100 first and then add 1, which is insignificant.
How do you even cater for these scenarios? This needs more than arithmetic.
Uh, what do you mean? The answer is very obviously 1 no matter what.
Obviously it is 1.
But try it on the iOS calculator: the answer is 0.
The reason is that when computing with large numbers, e.g. 100000........n + 1 - 100000........n, the addition of 1 is pretty insignificant.
Yes, if you use limited-precision data types. But you have it the wrong way around: if you first cancel out the $BIGNUM (i.e. reorder to $BIGNUM - $BIGNUM + 1), the answer is 1; if you first evaluate $BIGNUM + 1, the answer is 0, because $BIGNUM + 1 has no representation distinct from $BIGNUM. Limited-precision arithmetic is not, in general, associative. It's still arithmetic, though, just not in the ring of integers. But the whole point of the article was that it is, of course, possible to do better and get exact results.
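Concretely, in Python (doubles standing in for whatever limited-precision type a calculator backend might use):

    big = 1e100
    print((big + 1) - big)   # 0.0: the +1 is absorbed before the subtraction happens
    print((big - big) + 1)   # 1.0: reorder so the cancellation happens first
    # floating-point addition is not associative, so the grouping decides the answer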
>The answer is very obviously 1 no matter what
No, because only in our imaginations, and nowhere in the universe, can we ignore the significance of measurements. If we are sending a spaceship to an interstellar object 1 light year away from Earth, and the spaceship is currently 25 miles from Earth (on the way), you are insisting that you know more about the distance from Earth to the object than you actually do if you think the distance from the spaceship to the object is 587862819274.1 miles.
You are discussing physics. Everyone else in this thread is discussing mathematics. Sorry but you are the one who's off topic.
why would a computer make the mistake that EVERYONE HERE CAN'T GROK?
For the reasons in my comment, and, according to you, in nobody else's.
Also, the comment I was replying to said "1 no matter what", and I was pointing out where it would matter what.
Interesting article, but it feels like wasted effort for what is probably the most bare-bones calculator app out there. The Android calc app has the 4 basic operations plus sin, cos, tan, ^, ln, log, √, and !, and that's it. I think most people serious about calculator usage either have a physical one or use another, more featureful app, and the rest don't need such precision.
It's not wasted effort at all, as this app comes installed by default for over a billion users. Only a tiny fraction will ever install another calculator app, so the default one better work entirely correctly. When you have that many users it's hard to waste effort on making the product better.
Nah, yeah, you can over-engineer anything.
In the year he spent on this, he could easily have just made some minor interface tweaks to a Ruby REPL that includes the BigDecimal library. In fact, I bet feeding this post to an AI could produce such a numerically accurate calculator app, maybe as a single-file Sinatra Ruby web app designed to format to phone resolutions natively.
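For what it's worth, a big-decimal type alone doesn't quite get you there; a quick sketch with Python's decimal module (standing in for BigDecimal here; the underlying limitation of a finite number of decimal digits is the same) shows the kind of case the post's exact-arithmetic approach is meant to handle:

    from decimal import Decimal, getcontext

    getcontext().prec = 28              # default: 28 significant digits
    third = Decimal(1) / Decimal(3)     # 0.3333...3, cut off after 28 digits
    print(third * 3)                    # 0.9999999999999999999999999999
    print(third * 3 == 1)               # False: the representation error never cancels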