Posts: 2
Threads: 1
Joined: Jun 2010
07/20/2010, 05:51 PM
(This post was last modified: 07/20/2010, 05:55 PM by brangelito.)
Isn't zeration a unary operation rather than a binary one?
Edit:
Just like I'd call multiplication a repetition of addition, I'd call addition a repetition of incrementation.
So the operator would be:
z(x)=x+1
Posts: 1,395
Threads: 91
Joined: Aug 2007
(07/20/2010, 05:51 PM)brangelito Wrote: Isn't zeration a unary operation rather than a binary one?
Sure, but the ladder consists only of binary operations, so zeration has one fake argument.
Posts: 568
Threads: 95
Joined: Dec 2010
11/09/2011, 01:40 AM
(This post was last modified: 11/09/2011, 01:47 AM by JmsNxn.)
This is a rather disturbing flaw I've come across with zeration, and it surprises me that the authors didn't consider it.
By the Ackermann function, each operation is generated from the one below it:
a °[n+1] (b + 1) = a °[n] (a °[n+1] b)     (I)
which is the core of how the function is defined. At the level below addition this reads a + (b + 1) = a ° (a + b), which forces a ° c = c + 1 for every c.
I'm gonna put it right out there that zeration DOES NOT satisfy this property.
By definition:
a ° b = max(a, b) + 1
or, spelled out:
a ° b = a + 1 if a > b;  a ° b = b + 1 if a < b;  a ° b = a + 2 if a = b.
Take the obvious example, a > 0:
a ° 0 = a + 1
however, by Ackermann's law (I), setting b = -a:
a ° 0 = a ° (a + (-a)) = a + (-a) + 1 = 1
Did anybody else notice this? How could they overlook such a fatal flaw? It took me all of fifteen minutes of research to see it.
It can actually be extended generally to all negative numbers:
a ° (-b) = a + 1   for a > -b
however, by (I) we know it must be
a ° (-b) = 1 - b
Clearly zeration is not the operator below addition in the Ackermann function.
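The contradiction can be checked mechanically. A minimal Python sketch (the function name is mine) of the max-based definition, tested against what law (I) requires:

```python
def zeration(a, b):
    # max-based definition: a ° b = max(a, b) + 1, except a ° a = a + 2
    if a == b:
        return a + 2
    return max(a, b) + 1

# Law (I) requires a ° (a + b) = a + b + 1 for every b, but for
# negative b the max-based definition ignores the smaller argument:
a, b = 5, -3
required = a + b + 1          # law (I) demands: 3
actual = zeration(a, a + b)   # max(5, 2) + 1 = 6
```

For non-negative b the two agree, which is exactly why the flaw only shows up once negative arguments are considered.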
Posts: 22
Threads: 1
Joined: Feb 2008
(11/09/2011, 01:40 AM)JmsNxn Wrote: This is a rather disturbing flaw that I've come across with zeration that rather surprises me the authors didn't consider.
By the Ackermann function:
[...] This is only a problem if you consider the Ackermann function to be the standard for defining operations. The usual definition of the Ackermann function (not the original one) has:
A(0,n) = n+1
A(m,n) = (2 Δ{m-2} (n+3)) - 3
where Δ{x} denotes the x'th operation. Notice that the operations are slightly off from the "standard" basic operations; this is usually not considered a problem because we're considering asymptotic behaviour here.
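For reference, the usual two-argument recursion being described can be sketched in Python; for small m it reproduces the closed forms A(1,n) = n+2, A(2,n) = 2n+3, A(3,n) = 2^(n+3) - 3, matching the "2 Δ (n+3) - 3" pattern:

```python
def ackermann(m, n):
    # usual (Robinson/Peter) definition:
    #   A(0, n) = n + 1
    #   A(m, 0) = A(m - 1, 1)
    #   A(m, n) = A(m - 1, A(m, n - 1))
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Note the recursion depth explodes quickly; m up to 3 with small n is as far as a naive implementation comfortably goes.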
The motivation behind zeration is an operation that, when repeated n times, amounts to addition by n. So m zerated to itself n times should equal m+n. Clearly, zeration should be somehow linked to the increment operator, since that's the only thing that can sanely give you addition by n when repeated n times. So m zerated to itself should equal (m+1). However, this tells us nothing about what happens if m is zerated to a number not equal to itself. So should m zerated to k be equal to (m+1) or (k+1)? It makes little sense to always choose one or the other, because then the operator is essentially unary and the other argument is always ignored.

This is the reason the definition was chosen to be max(m,k)+1, since, "intuitively" speaking, the largest argument to the operation should dominate in the computation of the result. That this definition doesn't 100% fit into one of the definitions of the Ackermann function isn't really a problem, since the asymptotic behaviour of the n-times-composed function is the same. Besides, having the max operation come out as part of the basic operations hierarchy is nice, because it is quite frequently used in math.
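As a sanity check on the "repeated zeration amounts to addition" motivation, here is a small Python sketch (function names mine) of the max-based definition and its right-bracketed n-fold self-composition a ° (a ° (... ° a)), which gives a + n for n >= 2:

```python
def zeration(a, b):
    # max-based definition: the larger argument dominates,
    # with the tie case a ° a = a + 2
    if a == b:
        return a + 2
    return max(a, b) + 1

def zerate_self(a, n):
    # right-bracketed fold of n copies of a: a ° (a ° (... ° a))
    result = a
    for _ in range(n - 1):
        result = zeration(a, result)
    return result
```

For example, zerate_self(5, 3) works out to 5 ° (5 ° 5) = 5 ° 7 = 8 = 5 + 3, matching the bracketing used elsewhere in this thread.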
Posts: 568
Threads: 95
Joined: Dec 2010
Okay, alright, that seems reasonable. I just found it misleading that they would write that formula and state that zeration satisfies this property.
I guess it should just be acknowledged that if we want to analytically continue the Ackermann function, zeration doesn't come into play.
Posts: 22
Threads: 1
Joined: Feb 2008
(11/10/2011, 01:20 AM)JmsNxn Wrote: [...]I guess it should just be acknowledged that if we want to analytically continue the Ackermann function, zeration doesn't come into play.
On the contrary, if we ever manage to analytically continue the Ackermann function, I'd be very curious to find out what it does at the zeration level, if it's possible to continue it there. It may or may not match the max(a,b)+1 formula (I'm guessing it probably won't). The chances of this happening in our lifetime are slim, though... we have enough trouble already deciding which analytic continuation of tetration should be canonical, as this forum proves. To analytically continue the Ackermann function would seem to require a continuation based on the canonical continuations of the individual operations in the Grzegorczyk hierarchy. So if we can't even decide what is canonical, we aren't ready to generalize across operations yet.
Posts: 3
Threads: 1
Joined: Feb 2012
(04/03/2008, 02:25 PM)GFR Wrote: Pillar 3 - The Hyperroots. It has been known since the Ancient Greeks' times that the square root of a number can be calculated by iterating the following functional equation:
y = sqrt x -> (y + x/y) / 2 => y
Iterating n <= (n + x/n) / 2, starting from an approximate solution n, rapidly converges to the square root of x. About 20 years ago, Konstantin Rubtsov had the idea of applying a similar formulation to calculate the square super-root, as well as the half of a number (!!), both left-inverse hyperops of the root type. The compact formulation of that can be generalized as follows:
y = x /[s]2 -> y <= (y [s-1] (y [s]\ x)) /[s-1] 2.
This formula can be implemented as follows:
y = ssqrt x -> y <= sqrt (y * log_y(x))
y = sqrt x -> y <= (y + x/y) / 2
y = x / 2 -> y <= (y ° (x - y)) - 2
I experimented with the formulas a bit and I noticed the following formula for calculating the superroot works just as well if not better:
y = ssqrt x > y <= (y + log_y(x)) / 2
I prefer this formula because division is simpler than taking the square root. The difference between the formulas is that yours takes the geometric mean of y and log_y(x) while mine takes the arithmetic mean, just like with the square root formula.
Your division formula uses the "zeric mean", however, which is far slower than the others if you start far off and doesn't work for non-integers. It can't really be compared well to the other kinds of means, I think.
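Both super-root iterations above are easy to try numerically. A hedged Python sketch (function names mine) solving y^y = x with the geometric-mean rule and the arithmetic-mean variant:

```python
import math

def ssqrt_geometric(x, y=2.0, iters=60):
    # y <= sqrt(y * log_y(x)); at the fixed point y = log_y(x), i.e. y^y = x
    for _ in range(iters):
        y = math.sqrt(y * (math.log(x) / math.log(y)))
    return y

def ssqrt_arithmetic(x, y=2.0, iters=60):
    # y <= (y + log_y(x)) / 2, analogous to the Babylonian square-root rule
    for _ in range(iters):
        y = (y + math.log(x) / math.log(y)) / 2
    return y
```

Starting from y = 2, both converge to ssqrt(27) = 3 (since 3^3 = 27); the initial guess must keep y away from 1, where log_y is undefined.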
Posts: 100
Threads: 12
Joined: Apr 2011
03/20/2015, 06:11 AM
(This post was last modified: 03/20/2015, 07:15 AM by marraco.)
(02/19/2008, 12:24 PM)bo198214 Wrote: the bracketing must be to the right. So
ao(ao(aoa))=a+4
ao(aoa)=a+3
aoa=a+2
a = a+1 ???
The answer to that question becomes clear if zeration is defined this way:
Neutral element:
This is similar to addition:
Neutral element:
Also is similar to product:
Neutral element:
And is similar to exponentiation
Neutral element: where I conjecture that "?" is the inverse function of tetration, and is not slog.
Note that the neutral elements are all related to the inverse of the next-higher-ranked operation.
I think that is a very important clue.
But why do I choose this as the neutral element of zeration?
Because it is
Because
Because this sequence:
I conjecture that zeration has a periodic component, so it can just add 1 no matter how large its arguments are, and also:
ln(a+b) could be equal to ln(a) ° ln(b)
ln(a+b) = ln(a) + ln(1 + b/a)
if a > b => 0 < ln(1 + b/a) < ln(2)
if a = b => ln(1 + b/a) = ln(2)
This is suspiciously like zeration: a small amount is added to the larger argument, or a fixed amount (here ln(2), there 2) when a = b.
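The bound is easy to verify numerically. A small Python sketch (function name mine) of the identity and its ln(2) ceiling:

```python
import math

def log_sum_split(a, b):
    # ln(a + b) = ln(a) + ln(1 + b/a); for a >= b > 0 the correction
    # term ln(1 + b/a) lies in (0, ln 2], hitting ln 2 exactly at a == b,
    # so the larger argument dominates, zeration-style.
    return math.log(a), math.log(1 + b / a)

base, correction = log_sum_split(100.0, 3.0)
```

Here base + correction recovers ln(103) exactly, while the correction itself stays below ln(2) ≈ 0.693.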
Posts: 100
Threads: 12
Joined: Apr 2011
(02/21/2008, 11:52 PM)quickfur Wrote: The interesting thing about this is that if we then construct inverse elements w.r.t. #, then we must admit new numbers that lie "before" . This seems quite reminiscent of how constructing the inverse of addition created the negative numbers, the inverse of multiplication created the rational numbers and numbers like lie beyond the neutral number of product
(02/21/2008, 11:52 PM)quickfur Wrote: , and the (radical) inverse of exponentiation created the real numbers (due to such constructs as ). Combining the (radical) inverse of exponentiation with the negative numbers gave us the complex numbers. One can hardly wonder that the inverse of zeration would yield new numbers too. (It makes one wonder if the inverse of tetration would also create new numbers... I suspect it must've come up in this forum before, right?) should have a special meaning, and also , , and
