It’s no secret that the upcoming Helio X20 is going to be a bomb, for lack of a better word. Quite a few smartphone makers seem to be queuing up to get an X20 under the hood of their phones… which is understandable given the kind of performance on offer, as the Helio X20 benchmarks suggest.
Helio X20 benchmarks have just surfaced on Geekbench, where the upcoming powerhouse of an SoC posted some crazy scores. Let’s take a look.
As you would expect, the deca-core Helio X20’s strength lies in multi-core performance. It will now depend on developers to harness every bit of the available horsepower.
If raw numbers are what you deal in, here’s how the Helio X20 does:
- Multi-core performance: 7037 points
- Single-core performance: 2094 points
To put things into perspective, the Kirin 950 manages something around 6200 multi-core points, while the Apple A9 does 2500+ single-core points.
And those are currently the best scores in their respective departments, multi- and single-core… that is, until the Helio X20 finally shows up in a smartphone.
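Taking the quoted figures at face value, the gaps work out as follows (a quick sanity check on the article’s numbers, not an official comparison):

```python
# Geekbench 3 scores quoted above (points)
x20_multi, x20_single = 7037, 2094
kirin950_multi = 6200   # "around 6200", per the article
a9_single = 2500        # "2500+", per the article

# Helio X20's multi-core lead over the Kirin 950
multi_lead = (x20_multi / kirin950_multi - 1) * 100   # ~13.5%

# Apple A9's remaining single-core lead over the X20
single_gap = (a9_single / x20_single - 1) * 100       # ~19%

print(f"X20 multi-core lead over Kirin 950: {multi_lead:.1f}%")
print(f"A9 single-core lead over X20: {single_gap:.1f}%")
```

So the X20’s multi-core win is comfortable but not enormous, while Apple keeps a double-digit single-core edge on these numbers.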
One of the most talked-about upcoming Helio X20 phones has been the unnamed Zopo phone. Other companies, probably including Xiaomi, are vying to launch the world’s first X20 phone.
GREAT !
read this just 10 min ago in Gizmoo china (are they going faster than u or are u guys slacking off or something ?!)
This means we’ll have yet another year of sanely priced Flagships! YAY !
TAKE THAT QUALCOMM !
and THAT KApow and THAT Boum and THAT Sponk
nicely done 😀
Don’t look at the total; look at integer, FP and memory.
In single-core vs Qualcomm’s new core it wins in integer, doing quite a lot better than expected, and it almost matches it in FP. In memory Qualcomm does insanely well, but the X20 is just 64-bit DDR3, so it scores well enough, and the memory bandwidth should be enough for just 2 big cores. http://browser.primatelabs.com/geekbench3/compare/4547717?baseline=4630896
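For a rough idea of what a 64-bit interface delivers: the peak figure below assumes a 933 MHz LPDDR3 clock (a common LPDDR3-1866 speed grade); that clock is my assumption, not a number from the comment.

```python
# Peak theoretical bandwidth of a 64-bit DDR memory interface.
# 933 MHz is an assumed LPDDR3 clock, not a confirmed X20 spec.
bus_width_bytes = 64 // 8        # 64-bit bus -> 8 bytes per transfer
clock_hz = 933e6
transfers_per_clock = 2          # double data rate

bandwidth_gb_s = bus_width_bytes * clock_hz * transfers_per_clock / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")   # ~14.9 GB/s
```

Around 15 GB/s peak, which supports the point that the interface is adequate for a couple of big cores but remains the X20’s weak spot next to Qualcomm’s memory scores.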
Or to put it another way: in single-core integer and FP it’s 50% faster than the Exynos 7420.
Geekbench is not the best at comparing it with PC SoCs but, if you go there… http://browser.primatelabs.com/geekbench3/compare/4618424?baseline=4630896
Looks like MTK beats Qualcomm in the CPU department while losing on memory.
what would be the final effect in real life?
i mean, opening a web page full of JavaScript for example, which one would be faster?
Or a page full of GIF or YouTube videos?
Answer is simple: MTK in this case, as the browser uses only two big cores. Actually 90% of apps in the foreground will use only two, but if you multitask between a 3D game & a browser then it would be nice to have another pair of big ones.
Now let’s make things more complicated. A 3-cluster SoC built from two 4x A53 clusters, where only one of them is active at a time and excludes the other, is not exactly the best choice for anything. The power-phase voltage shifting & the cost of task migration between cores (especially when the target is a previously inactive core) can only contribute to slower & less power-efficient execution. On the other hand Kryo cores are not meant to be performance stars alone; we still need to see how efficient they are. Going with the 2x2 config in the S820 wasn’t really a smart move from Qualcomm, & I have my doubts about how useful the Spectra SIMD will actually be considering its cost. On the other hand I am keen to say that the S652 (newly renamed) will be a perfectly balanced mainstream (upper mid-range) product, having just enough for a wide variety of possible user tasks (CPU, GPU, MMC & other highly parallel ones).
MTK’s new & existing offerings are much less balanced, especially in the GPU & massively parallel (multimedia) spheres. Although they could easily improve, if they weren’t such cheapskates, as much better licensable IP is available on the DSP front along with better & wider GPU implementations.
In the end the battle remains, & I would expect another tier of products around the end of Q3 2016 based on 22nm FD-SOI lithography that will be more optimized for cost & power/performance than the FinFET ones. We will see who does the best job in that tier.
For now only Samsung has amazed a bit, as someone who didn’t have any experience in custom CPU design & did a rather great job; I won’t even count Apple in.
sorry man, I’m really not into that stuff so don’t get mad if i tell you that i’ve read only a quarter of your comment. i appreciated your time though.
i simply wanted to know what regular people like me should expect in real life without going through the technology behind it.
As simply as it can be put:
The winner so far in the consumer-oriented segment is the S652, & in the flagship range the new Exynos.
The HiSilicon Kirin 950 would be worth considering if they hadn’t opted for a DSP that isn’t good enough (multimedia et cetera) for the lower high end. 😉
that helped me more.
thanks for the reply
Android browsing scales across many cores. Same for console ports, where Sony and MS use 8 small AMD cores.
As for clusters, the A53 has a very different efficiency profile, and you can’t really fit more than 2x A72 on 20nm, but you do need more cores.
What’s better depends on price. The SD652 will be a lot slower (and we’ll see how it does thermally), but how good it is depends on whether it’s $20 or $40. It’s on 28nm, so their costs will certainly be below $10; it remains to be seen how much below and what margins they are aiming for.
Mediatek’s GPU is just fine, better than one twice the size that just throttles to half the clocks, aimed at good benchmark scores and nothing else. Very few actual users play GPU-intensive games anyway.
Agree with you about the GPU only being needed by a few heavy gamers, but
why doesn’t the CPU follow the same logic?
Why are you such a fanatic about CPUs?
It does follow the same logic; I am not complaining that there aren’t 4 big cores, since 4 would have to be clocked a lot lower to fit in the TDP. You do actually need more than 2 big cores; you might not need 8 small ones, perf-wise, but there should be a significant benefit power-wise. The A53s are also rather small, so the cost is reasonable. Mediatek claimed this kind of power savings by using 3 clusters http://www.extremetech.com/wp-content/uploads/2015/05/MediaTek-Power.png
that brings us directly to my point.
i appreciate any effort to get a better power management rather than raw power for benchmark fanatics.
But i understand the business purpose, a more powerful SoC sells better than a more power efficient one.
Actually the best method for keeping power consumption within limits is still load balancing across the range of available cores (or clusters, in the GPU’s case), rather than big.LITTLE or turbo boost or anything else.
Scheduler logic is advancing towards load prediction in small steps, but we will need to wait a couple more years for it to mature.
Current power-saving modes do just that: they limit the CPU’s (& possibly the GPU’s) maximum frequency.
you are the man!
do you have a master’s degree in engineering or something?
I can tell that besides the passion you also have knowledge.
it’s nice to know something more about this stuff even though I never go into this depth.
Happy holidays, see you around.
I am just an enthusiast & like to read a lot about it (a real read of white papers, scientific works & comprehensive tests); I also write about it part-time locally, that is, when there is something worth writing about.
It’s actually simple math: 4 equal cores running @ 400 MHz will do approximately 1.5 times the work of the same single core working @ 1 GHz while consuming the same amount of energy. This is a simple example of load balancing.
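That trade-off can be sketched with a toy DVFS model, assuming dynamic power goes as C·V²·f with voltage roughly proportional to frequency (so P ∝ f³), and ignoring the leakage and voltage floor that make real silicon less forgiving:

```python
def throughput(cores: int, freq_ghz: float) -> float:
    """Aggregate work rate, assuming the workload parallelizes across cores."""
    return cores * freq_ghz

def power(cores: int, freq_ghz: float) -> float:
    """Relative dynamic power under the P ∝ f³ approximation."""
    return cores * freq_ghz ** 3

# One core at 1.0 GHz vs four of the same core at 0.4 GHz
t1, p1 = throughput(1, 1.0), power(1, 1.0)
t4, p4 = throughput(4, 0.4), power(4, 0.4)

print(f"4x0.4 GHz: {t4 / t1:.1f}x the work at {p4 / p1:.2f}x the power")
# prints: 4x0.4 GHz: 1.6x the work at 0.26x the power
```

The idealized model is even more generous than the comment’s 1.5x-at-equal-energy figure; leakage and the voltage floor eat into the advantage, which is why the realistic gain is smaller.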
Happy Holidays & all the best!
Don’t spill your bullshit on me.
Read & get some education.
http://www.moorinsightsstrategy.com/research-paper-do-8-cores-really-matter-in-smartphones/
There are zero browser engines that can use more than two cores. The first one will be Mozilla Servo, when it gets out next year, if it gets out.
The S652 won’t be a lot slower; it will be a bit slower in most CPU-based tasks but still more than fast enough for usage. The GPU is important, especially to some of us who use emulators to play with. In the end, multimedia & other highly parallel jobs are very important to most people. And that last bullshit about not needing a bigger GPU I leave to you, since you quoted MTK. Even if your app doesn’t use the full potential of the GPU, 4 clusters at 300 MHz will actually consume less energy than 2 of the same clusters @ 600 MHz. That & similar statements are for cheapskates who make cheap SoCs while trying to convince you that their shit is better than other people’s gold.
lol, I know that study and it is as dumb as it gets; actually it’s the second-dumbest study I’ve seen this year. They test SoCs and devices known to be overheating, and that’s why the results are what they are. The problem there is the hardware, and it leads to the wrong conclusion.
The SD652 uses the A72 at 1.8 GHz; the X20 uses the A72 at 2.5 GHz and scores better than expected at that clock. Here’s a comparison http://browser.primatelabs.com/geekbench3/compare/4558278?baseline=4630896
Fast enough is subjective; plenty of users find an A53 fast enough.
Even if you deem the GPU important, the fact remains that the SD810 and Samsung use a GPU that doesn’t fit in the thermal budget, so sustained performance is half of what you see in benchmarks. Mediatek uses a balanced approach when it comes to perf, power and costs. They could go for a bigger GPU at lower clocks for a slight gain, but that wouldn’t be cost-effective, and you can only do that in the very high end.
It is also lovely that you whine about this GPU but like the SD652, a SoC with an even weaker GPU. And of course your beloved 652 has 8 cores; not much of a difference between 10 and 8 in the end, if Android would only use 2 cores like you pretend.
You seem to have a need to prove that the SD652 is “better”. The SD652 is as good as its price and a user’s needs; same for the X20 and SD820. The 3 are not direct competitors, and all will be fine at certain price points.
Well, they used actual SoCs that are currently available. Every SoC made this year will have thermal throttling issues & will go over 2.5W if you know how to tax it, including an underclocked MT6735 & the S410. Actually they will go over 2W just with the CPUs fully utilized, & you can test this with a profiler &, let’s say, 7-Zip: let it compress for more than 5 minutes with the screen on. A GPU is a much bigger cluster than a CPU core & it naturally spends a lot more power.
Now let’s talk a little about the golden values of silicon when it comes to power consumption. Material leakage starts at 400 MHz & is sustainable up to, let’s say, 1 GHz; after that the leakage is huge & power consumption jumps almost progressively. As I explained, even a quad-core A53 will break the power limit (in severe use), so there is not much to talk about. Every manufacturer abuses the silicon, pushing frequency & trying to save on die size to minimize cost & maximize profits. This is not really good engineering, & MTK stands out among others by pushing this even more. When it comes to GPUs, Qualcomm still has the edge, as they still go with a monolithic single (large) cluster design. This approach costs more to develop, but the gain is that it scales 100% performance-wise, versus a solution that adds more than one cluster: for example, the first T760 cluster gives 100% & every additional one around 68~70%. Naturally this approach will also consume less power (one cluster) for the same level of performance. I wouldn’t call the A520 a weaker GPU than the T880 MP4. All Adrenos have a problem with any kind of surface texture transform, & it shows in this particular test & even more in AnTuTu’s 3D Anarchy, driven with lots of water surfaces (an old ATI design flaw). 3DMark is, let’s say, the best comparison currently available. Drivers are another story, but I won’t go into that this time; let’s just say both Mali & Adreno are on the bad side there.
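Taking the quoted ~70% contribution per additional cluster at face value, the scaling penalty of a multi-cluster design versus a monolithic one tabulates like this (illustrative numbers only):

```python
def multi_cluster_throughput(clusters: int, extra_scaling: float = 0.70) -> float:
    """Relative throughput: the first cluster contributes 1.0,
    each additional cluster only ~70% of a cluster's worth
    (the 68~70% figure quoted in the comment)."""
    return 1.0 + (clusters - 1) * extra_scaling

for n in range(1, 5):
    print(f"MP{n}: {multi_cluster_throughput(n):.2f}x a single cluster")
```

On these assumptions an MP4 delivers about 3.1x one cluster rather than 4x, which is the efficiency argument for a single large cluster.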
The S652 is direct competition to the X20 & it’s a better product; it will be higher priced, as always, but those $10~15 don’t change much in the final product pricing.
The thing is that the S652 is the best-balanced SoC announced to date & will give the best user experience across the board: faster than the S820 in CPU tasks (most folks would agree with me this is enough), good multitasking abilities, good enough for FHD gaming, rather good multimedia capabilities, including 4K video recording et cetera thanks to the QDSP 680, a good camera ISP, & certainly a better cellular radio than the competition (at least Apple knows what it is paying for).
MTK had a design victory this year with the MT6732; instead of keeping that & adding the cellular radio to the SoC (as an integrated part), they made the cheaper & much worse 67x5 series. The even worse part is that the MT6732 & MT6752 parts are discontinued, without any support anymore.
So I ask you once more: would you actually pay $15 more for a product that is better balanced for your possible needs, is nicer to battery life, & does have secure open stock support (CAF) for at least 2 years (as you probably won’t use that phone for more than 2 years)?
It’s actually a simple question, & don’t think that I am a large fan of QC (I actually hate them).
In practice the software, the connectivity and even the storage have a huge impact on browsing, so the end result depends less on the SoC.
The X20 is not really a direct competitor for the SD820; it should be a bit cheaper, and the X30 would aim to compete with the SD820. It remains to be seen how aggressive Qualcomm is with SD652/650 pricing and whether Mediatek tries to undercut those, and the SD820, with its pricing.
The integer score was quite a bit better than expected here; from what the MT8173 and Kirin 950 have shown us, expectations were around 2100, not over 2400. Here’s the X20 vs the Kirin 950 http://browser.primatelabs.com/geekbench3/compare/4632468?baseline=4630896
Really, I couldn’t care less about SoC benchmarks nowadays, as you can’t really go wrong with any of them; they all (well, most) tend to give more performance than most people need. Actually, I haven’t installed or run any benchmarking apps on my last 2 phones. I look for the more end-user-tangible features now, like camera, battery life, screen (quality), and the overall style, look, and feel of the phone.
Oh snap, maybe I’m getting old.
Not old, the SoCs are just getting really good lately. The only reason to buy a flagship in 2 years will be to use it as the core of a computer when docked beside a display + keyboard & mouse (convergence). And it’s not difficult to imagine these tasks being perfectly accomplished by a mid-ranger in the not-too-distant future. Can’t wait for that!
I agree with you, and I know where you’re coming from: Continuum from Microsoft, which enables a dock to connect your phone with keyboard, mouse and monitor… productivity at its best.
Well, I had Canonical’s idea of convergence in mind, but I guess MS has taken the lead (the approach is not quite the same); hopefully we’ll have a nice dose of converged devices starting from next year.
Wouldn’t bet on M$, & it’s a completely different approach. Canonical wanted to run a full Ubuntu image on the phone, & M$ is just justifying the need for desktop x86 Windows.
There are cheap set-top boxes from last year, you know, that run Ubuntu & Android decently enough to play with.
I know, I’m with you, I’ll just wait with my 2014 OnePlus One until convergence is a fact, probably by mid 2017, and vendors embrace Canonical’s vision… hopefully Meizu!
I always love seeing comments from people complaining about the SoC, and then when you ask them what they need more power for, they have no response. Even entry-level SoCs these days offer more power than the average user will ever need. Hardcore gamers and those who do a lot of video editing are the only ones who ever need all the power these chips have to offer.
Do a repack of a large game in 7-Zip & think again about it while it’s on the way. Look at this as a real-time benchmark; in that light you can use a profiler to put it on a real benchmarking scale.
There are still usable features in the consumer sphere that need a really good kick from general-purpose cores, & there always will be.
I actually want my phablet, in the time to come, to represent my main & complete computing platform, the PC platform as we know it.
sorry if i’m intervening here, but in my opinion your example doesn’t make any sense.
Why would anyone want to repack a large game in 7zip?
that’s not what a phone is meant to be.
people download a large game from the store and that’s it, no reason to repack it, unless you are a cracker.
but then i’d expect them to use a pc connected to the phone and zip it through the computer.
I really don’t see any reasonable scenario where all that power is needed.
Maybe video editing, but i still think it’s silly to do it on a smartphone.
simply my opinion.
7-Zip simply scales best on multicore & it’s good enough for an archiver; that’s why. I don’t have to be a pirate to have a large archive on the phone; I can simply use one for backups of various things, one fast & scalable enough. The rest of the comment above explains the rest of it.
How about video editing on a 4K 65″ TV, using a keyboard and mouse?
🙂
i guess you want to use your phone to replace your pc completely.
then the phone manufacturers should start providing a couple of USB ports, HDMI out, S/PDIF, a LAN port…
I am saying it would be a normal thing to expect in the future as an evolution of the PC; have a little more faith in wireless protocols.
“To put things into perspective, the Kirin 950 manages something around 6200 multi-core points, while the Apple A9 does 2500+ single-core points.”
Some more perspective on it: Intel’s best single-core score is about 4400, while a still highly respectable score is the i7-4820K’s 3300… which puts Apple at only about 75% of the performance of a solid, and still expensive, Intel part. The highest-scoring AMD part in the single-core test is the FX-9590, which roughly ties the A9 at 2461.
PC desktop users considering upgrades are told it’s okay to upgrade unless they bought in the last year or two, but aren’t really urged to upgrade unless they’re on SNB or older. A common score for a “reasonably powerful” PC that doesn’t need upgrading, then, might be well represented by the i5-3570K, which scores 3360/10800; the X20 is scoring about two-thirds of those numbers.
A minimum “reasonably powerful desktop” might be better represented by AMD’s A10-7700K, a quad core that doesn’t burn ginormous quantities of power or cash. It scores 2070/6140, meaning it is slightly weaker than the Helio X20.
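Using the scores quoted in this thread, the phone-versus-desktop ratios come out like this (Geekbench 3 points; the usual caveats about cross-ISA comparisons apply):

```python
# (single-core, multi-core) Geekbench 3 scores quoted above
scores = {
    "Helio X20": (2094, 7037),
    "i5-3570K":  (3360, 10800),
    "A10-7700K": (2070, 6140),
}

x20_s, x20_m = scores["Helio X20"]
for name, (s, m) in scores.items():
    if name != "Helio X20":
        print(f"X20 vs {name}: {x20_s / s:.0%} single, {x20_m / m:.0%} multi")
```

So the X20 lands at roughly two-thirds of a mid-range desktop i5 (62% single, 65% multi) and slightly ahead of the A10-7700K on both metrics.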
You really need to do a real one-by-one test from source (compiling it with optimizations) & exclude any crypto present in Geekbench 3 so that CISC vs RISC makes any sense, & even then it would be just a possible performance metric.
You see, ARM is still rather poor in the SIMD area, along with compilers & tools. That is something that was built up over a long time for the x86 platform: the variety of SIMD extensions, along with specialized toolchains, compilers & math libs, are fronts where ARM simply cannot (yet) compete with x86. It’s kind of sad that ARM is charging an extra fee (& an expensive one) for their optimized math libs, as they won’t get far this way. On the other hand, ARM GCC is far behind, while LLVM, with its wide engagement & lots of backend funding (Apple, Samsung…) along with lots of branches (Clang, Zapcc…), shows a lot of potential; they will still need 2~3 more years of development to take the lead.
Even though I think we basically disagree here, I voted you up because you made a different claim than I would have expected at first. It is good to learn something new! Thanks.
> You really need to do a real one by one test from the source (optimized compiling it) & exclude any crypto present in geekbench3 so that Cisc & RISC could get any sense
Well I don’t need to do that. 🙂
Actually most people feel the other way. The reason the benchmark receives flak about its x86 scores is that it removed optimizations, because Intel’s own compilers were skipping repetitious steps through optimization. But actually measuring the time it takes to do the work is the whole point of the benchmark. You specifically want it without that kind of optimization. So no, I disagree with the majority as much as I disagree with your line of argument; RISC compares quite fairly to CISC in the benchmark.
ARM might perform a bit better with a better optimized compilation, and CISC might be able to skip doing the work entirely to some degree in real world scenarios… but measuring what they actually do, the poor optimizations allowed are “close enough”.
Just a small lib with 400 math functions, for comparison purposes as you will see, & an almost complete list of popular x86 compilers. The project itself is more academic, but it’s remarkably not tied to anything or anyone when it comes to implementation.
http://www.yeppp.info/benchmarks.html
Comparing a CISC Intel x86 CPU to a RISC ARM CPU is pointless. It’s true that RISC ARM CPUs have become very powerful and have come a long way in hardware. But then again, this chipset is using more cores (8) vs Intel/AMD (4). Octa-core CPUs are way overpowered for their own good. Most smartphone consumers only use apps like Facebook, Instagram, Vine, Snapchat, WhatsApp and some games, and I highly doubt you need a full 8-core CPU for that. Games like Asphalt 8 and Dead Trigger 2 only use two cores and depend heavily on GPU power alone. Single-core performance really matters more on smartphones than multi-core. I’m not saying higher core counts like hexa or octa are useless; that’s why Intel Haswell-E, Broadwell-E and Xeon CPUs exist. Though I still don’t get why Intel is still using dual cores; dual cores should have been phased out a long time ago. Quad cores should be the standard, with hexa or octa cores on high-end PCs. Let’s just hope Intel’s Cannon Lake CPUs fix all of that.
GREAT !
read this just 10 min ago in Gizmoo china (are they going faster than u or are u guys slacking off or something ?!)
This means we’ll have yet another year of sanely priced Flagships! YAY !
TAKE THAT QUALCOMM !
and THAT KApow and THAT Boum and THAT Sponk
nicely done 😀
Don’t look at total , look at integer , FP and memory.
In single core vs Qualcomm’s new core it wins in integer doing quite a lot better than expected and it almost matches it in FP. In memory Qualcomm does insanely well but the X20 is just 64 bit DDR3 so scores well enough and the memory bandwidth should be enough for just 2 big cores. http://browser.primatelabs.com/geekbench3/compare/4547717?baseline=4630896
Or to put it another way , in single core in integer and FP it’s 50% faster than the Exynos 7420.
Geekbench is not the best at comparing it with PC SoCs but, if you go there… http://browser.primatelabs.com/geekbench3/compare/4618424?baseline=4630896
looks like MTK beats Q’com in CPU department while loses over the memory.
what would be the final effect in real life?
i mean, opening a web page full of JavaScript for example, which one would be faster?
Or a page full of GIF or YouTube videos?
Answer is simple. MTK in this case as browser uses only two big cores. Actually 90% of apps that are in a front plane will use only two, but if you multitask between a 3D game & browser then it would be nice to have another pair of big ones.
Now let’s make things more complicated. 3 CPU cluster SoC combined of two x4 A53 clusters even if only one will be in a active state excluding another one is not exactly the best choice for anything. Regarding the power phase voltage shifting & cost of trade migration between any core (especially if it targets before inactive core) can only contribute to slower & less power efficient execution. On the other hand Kyro cores are not ment to be performance alone star’s we still need to see how efficient they are. Going with the 2x 2 config in S820 whose not really a smart move from Qualcomm & I have my doubts how useful Spectra SIMD will actually be considering it cost. On the other hand I am keen to say that S652 (newly renamed) will be perfectly balanced main stream (upper mid range) product. Having just enough for variety of tasks satisfying vast variety of possible user tasks (CPU, GPU, MMC & other high parallel ones).
MTK’s new & existing offerings are much less balanced, especially in GPU & massive parallel (Multimedia) spheres. Al do they culd easily improve if they are not a cheap basterd’s as the much better licensable IP’s are available in DSP front along with better & wider GPU implementations.
At the end battle remains & I would expect another tire of products around end of 3rd Quarter of 2016 based on 22nm FD-SOI lithography that will be cost, power/performance more optimized than FinFET ones. We will see who will do a best yet in that tire.
For now only Samsung amazed a bit as someone who didn’t had a experience in costume CPU design & did a rather great job, won’t actually even count Apple in.
sorry man, I’m really not into that stuff so don’t get mad if i tell you that i’ve read only a quarter of your comment. i appreciated your time though.
i simply wanted to know what regular people like me should expect in real life without going through the technology behind it.
Simply as it can be.
The winer so far in the consumer orientated segment is a S652 & in the flagship range new Exunos.
HiSilicon Kirin 950 would be worth of consideration if they didn’t opted for not good enough DSP (multimedia & cetera abilities) in the lower high end. 😉
that helped me better.
thanks for the reply
Android browsing scales on many cores. Same for console ports where Sony and MS use 8 small AMD core.
As for clusters, A53 has a very different efficiency profile and you can’t really fit more than 2xA72 on 20nm but you do need more cores.
What’s better depends on price. SD652 will be a lot slower (and we’ll see how it does thermally) but how good it it depends if it’s 20$ or 40$. It’s on 28nm their costs will certainly be bellow 10$,remains to be seen how much bellow and what margins they are aiming for.
Mediatek’s GPU is just fine,better than one twice the size that just throttles to half the clocks , aimed at good scores in benchmarks and nothing else. very few actual users play GPU intensive games anyway.
In practice the software, the connectivity and even the storage has a huge impact on browsing so the end result depends less on the SoC.
X20 is not really a direct competitor for SD820, should be a bit cheaper and X30 would aim to compete with SD820. Remains to be seen how aggressive Qualcomm is with SD652/650 pricing and if Mediatek tries to undermine those and SD820 with it’s pricing.
The integer score was quite a bit better than expected here, from what MT8173 and Kirin 950 have shown us, expectations were around 2100 not over 2400. Here the X20 vs Kirin 950 http://browser.primatelabs.com/geekbench3/compare/4632468?baseline=4630896
Agree with you about the GPU needed by few heavy gamers, but
why the CPU doesn’t follow the same logic?
Why are you so fanatic of CPUs?
It does follow the same logic, i am not complaining that there aren’t 4 big cores since 4 would have to be clocked a lot lower to fit in the TDP.. You do actually need more than 2 cores, you might not need 8 small ones with the 2 big ones ,perf wise, but there should be a significant benefit power wise. The A53 are also rather small so the cost is reasonable.Mediatek claimed this kind of power savings by using 3 clusters
that brings directly to my point.
i appreciate any effort to get a better power management rather than raw power for benchmark fanatics.
But i understand the business purpose, a more powerful SoC sells better than a more power efficient one.
Don’t spill your bullshits on me.
Read & get some education.
http://www.moorinsightsstrategy.com/research-paper-do-8-cores-really-matter-in-smartphones/
Their is zero browser engines that can use more than two cores. First one will be Mozilla Servo when it gets out next year & if it gets out.
S6520 won’t be a lot slower, it will be a bit slower in most cpu based tasks but still more than fast enough for usage. GPU is important & especially to some of us that use emulators to play with. At the end & multimedia & other high parallel job’s are very important to most people. & the last bullshit about how you don’t need a bigger gpu I live to you as you quoted MTK. Even if your app doesn’t full potential of the gpu 4 clusters at 300 MHz will actually consume less energy than 2 same clusters @ 600MHz. That & similar statements are for cheap basterd’s that make a cheap SoC’s wile trying to convince you how their shit is better than others people gold.
lol i know that study and it is as dumb as it gets, actually it’s the second dumbest study i’ve seen this year. They test SoCs and devices known to be overheating and that’s why the results are what they are. The problem there is the hardware and leads to the wrong conclusion.
SD652 is using A72 at 1.8GHz, X20 is using A72 at 2.5GHz and scores better than expected at that clock. Here a comparison http://browser.primatelabs.com/geekbench3/compare/4558278?baseline=4630896
Fast enough is subjective, plenty of users find an A53 fast enough.
Even if you deem the GPU as important,the fact remains that SD810 or Samsung use a GPU that doesn’t fit in the thermal budget and sustained performance is half of what you see in benchmarks. Mediatek uses a balanced approach when it comes to perf, power and costs. They could go for a bigger GPU at lower clocks for a slight gain but that wouldn’t be cost effective and you can only do that in the very high end.
It is also lovely that you whine about this GPU but you like the SD652, a SoC with a weaker GPU.And ofc your beloved 652 is 8 cores, not much of a difference between 10 and 8 in the end if Android would only use 2 cores like you pretend.
You seem to have a need to prove that SD652 is “better”. SD652 is as good as it’s price and a users needs , same for X20 and SD820. The 3 are not direct competitors and all will be fine at certain price points.
Btw here a SD652 3D Mark run
Well they used actual SoC’s that are currently available. Every SoC made this year will have a thermal throttling issues & will go over 2.5W if you know how to tax it including undercloked MT6735 @ GHz & S410. Actually they will go over 2W jest with CPU’s fully utilized & you can test this with a profiler & let’s say 7 Zipp let it compress more than 5 min with a screen on. GPU is a lot bigger cluster vs CPU core & it spends a lot more power naturally.
Now let’s talk a little about golden values of silicon when it comes to to power consumption. As the material leaking starts at 400MHz, it’s sustainable up to let’s say 1GHz after that leaking is huge & power consumption jumps almost progressive. As I explained you that even quad core A53 will brick a power limit (in severe use) their is not much to talk about. Every simple manufacturer rapes the Silicon pushing frequency & trying to save on a die size to minimize cost & max the profits. This is not really a good engineering. MTK studs up among others by pushing this even more than others. When it comes to to GPU’s Qualcomm stil have edge as they still go with monolithic single (large) cluster design. This approach costs more to develop but gains are that it scales 100% performance vise to a solution where you add more than one cluster, for example first T760 cluster gives 100% & every other around 68~70%. Naturally this approach will consume & less power (one cluster) for same lv of performance. I wouldn’t call the A520 a weaker GPU then T880 MP4. All Adrenos have a problem with any kind of surface texture transrorma & it shows in this particular test & even more in Antutu 3D Anarchy driven with lots of water surfaces, old ATI design flow. 3D Mark is let’s say best one for comparation currently available. Driver’s are another story but I won’t go into that this time let’s just say how & Mali & Adreno are on a bad side there.
The S652 is a direct competition to X20 & it’s a better product & will be higher priced as always but those 10~15$ doesn’t change much in a final product pricing.
Thing is, the S652 is the best-balanced SoC announced to date & will give the best user experience across the board: faster than the S810 in CPU tasks (most folks would agree with me this is enough), good multitasking abilities, good enough for FHD gaming, rather good multimedia capabilities including 4K video recording et cetera thanks to the QDSP 680, a good camera ISP, and certainly a better cellular radio than the competition (at least Apple knows what it is paying for).
MTK had a design victory this year with the MT6732; instead of keeping that & adding the cellular radio to the SoC (as an integrated part), they made the cheaper & much worse 67×5 series. The even worse part is that the MT6732 & MT6752 are discontinued, without any support any more.
So I ask you once more: would you actually pay 15$ more for a product that is better balanced to your possible needs, is nicer to battery life & does have secure open stock support (CAF) for at least 2 years (as you probably won't use that phone for more than 2 years)?
It's actually a simple question, & don't think that I am a big fan of QC (I actually hate them).
Actually the best method for keeping power consumption within limits is still load balancing across the range of available cores (or clusters in the GPU's case), rather than big.LITTLE or turbo boost or anything else.
Scheduler logic is advancing towards load prediction in small steps, but we will need to wait a couple more years for it to mature.
Current power-saving modes do just that: they limit the CPU's (& possibly the GPU's) maximum frequency.
you are the man!
do you have a master's degree in engineering or something?
I can tell that besides the passion you also have knowledge.
it's nice to know something more about this stuff even though I never go into this depth.
Happy holidays, see you around.
I am just an enthusiast & like to read a lot about it (a real read of white papers, scientific works & comprehensive tests). I also write about it part-time locally, that is, when there is something worth writing about.
It's actually simple math: 4 equal cores running @ 400 MHz will do approximately 1.5 times the work of the same single core working @ 1 GHz, while consuming the same amount of energy. This is a simple example of load balancing.
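A back-of-envelope check of that claim, using the classic dynamic-power model P ≈ C·V²·f and assuming voltage scales roughly linearly with frequency. Both are simplifications (leakage, DVFS curves, and imperfect parallelism are ignored), so treat this as a sketch of the argument, not measured data:

```python
# Dynamic CPU power scales ~ C * V^2 * f; if voltage also scales
# roughly with frequency, power grows ~cubically with clock speed.
# All values below are in relative (normalized) units.

def dynamic_power(freq_ghz: float, v_at_1ghz: float = 1.0) -> float:
    voltage = v_at_1ghz * freq_ghz   # assumed linear V-f scaling
    return voltage ** 2 * freq_ghz   # capacitance C folded into units

# One core at 1 GHz: throughput 1.0 (normalized), power 1.0
single_power = dynamic_power(1.0)

# Four cores at 0.4 GHz: throughput 4 * 0.4 = 1.6x, power 4 * 0.4^3
quad_power = 4 * dynamic_power(0.4)

print(round(quad_power / single_power, 3))  # ~0.256: a quarter the power
print(4 * 0.4)                              # 1.6x the throughput
```

Under this idealized model the four slow cores do ~1.6x the work for roughly a quarter of the dynamic power, so the "1.5x at equal energy" figure is plausible once real-world leakage and imperfect parallel scaling are added back in.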
Happy Holidays & all the best!
Really, I couldn't care less about SoC benchmarks nowadays, as you can't really go wrong with any of them; they all (well, most) tend to give more performance than what most people need. Actually, I haven't installed or run any benchmarking apps on my last 2 phones. I look for the more end-user-tangible features now, like camera, battery life, screen (quality), and the overall style, look, and feel of the phone.
Oh snap, maybe I’m getting old.
Not old; SoCs are just getting really good lately. In 2 years the only reason to buy a flagship will be to use it as the core of a computer when docked beside a display + keyboard & mouse (convergence). And it's not difficult to imagine these tasks being perfectly accomplished by a mid-ranger in the not-too-distant future. Can't wait for that!
I always love seeing comments from people complaining about the SoC and then when you ask them what they need more power for, they have no response. Even entry level SoC’s these days offer more power than the average user will ever need. Hardcore gamers and those who do a lot of video editing are the only ones who ever need all the power these chips have to offer.
Do a repack of a large game in 7-Zip & think again about it while it's on the way. Look at this as a real-time benchmark, & in that light you can use a profiler to put it on a real benchmarking scale.
There still are usable features in the consumer sphere that need a really good kick from general-purpose cores, & there always will be.
I actually want my phablet, in the time to come, to represent my main & complete computing platform, or PC platform as it's known.
sorry if I intervene here, but in my opinion your example doesn't make any sense.
Why would anyone want to repack a large game in 7-Zip?
that's not what a phone is meant to be.
people download a large game from the store and that's it, no reason to repack it, unless you are a cracker.
but then I'd expect they use a PC connected to the phone and zip it through the computer.
I really don't see any reasonable scenario where all that power is needed.
Maybe video editing, but I still think it's silly to do through a smartphone.
simply my opinion.
7-Zip simply scales best on multicore & it's good enough for archiving, that's why. I don't have to be a pirate to have a large archive on the phone; I can simply use one for backups of various things, one fast and scalable enough. The rest of the comment above explains the rest of it.
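The "scales best on multicore" point is easy to illustrate: an archiver can compress independent chunks in parallel. A toy sketch (zlib instead of 7-Zip's LZMA2, and a simplistic fixed chunking; both are stand-ins for illustration only):

```python
# Toy illustration of why an archiver scales with core count:
# independent chunks are compressed in parallel. CPython's zlib can
# release the GIL while compressing, so even threads may see speedup.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, chunk_size: int = 1 << 16, workers: int = 4):
    """Split data into fixed-size chunks and compress them in parallel."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunks(compressed):
    """Reassemble the original bytes from the compressed chunk list."""
    return b"".join(zlib.decompress(c) for c in compressed)

payload = b"backup backup backup " * 50_000
parts = compress_chunks(payload)
assert decompress_chunks(parts) == payload  # round-trips correctly
```

Real 7-Zip does something similar with LZMA2 blocks, which is why its throughput keeps climbing as you add cores, while single-threaded codecs plateau.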
How about video editing on a 4K 65″ TV, using a keyboard and mouse?
@disqus_vsEtVPGrtX:disqus I agree with you, and I know where you are coming from: Continuum from Microsoft, which enables a dock to connect your phone to a keyboard, mouse and monitor… productivity at its best.
🙂
I guess you want to use your phone to replace your PC completely.
then the phone manufacturers should start providing a couple of USB ports, HDMI out, S/PDIF, a LAN port…
I am saying it would be a normal thing to expect in the future, as an evolution of the PC, & have a little more faith in wireless protocols.
Well, I had Canonical’s idea of convergence in mind, but I guess MS has taken the lead (the approach is not quite the same); hopefully we’ll have a nice dose of converged devices starting from next year.
Wouldn't bet on M$, & it's a completely different approach. Canonical wanted to run a full image of Ubuntu on the phone, & M$ is just justifying the idea that we need desktop x86 Windows.
There are cheap last-year set-top boxes, you know, that run Ubuntu & Android decently enough to play with.
I know, I’m with you, I’ll just wait with my 2014 OnePlus One until convergence is a fact, probably by mid 2017, and vendors embrace Canonical’s vision… hopefully Meizu!
Why do we need these 10 cores?
8 cores of the X20 are just the same as the low-end MT6753
Why not just make it dual-core, which would be much cheaper?
to save battery while on light duties
why not just lower the frequency to e.g. 300 MHz?
my answer is that 10 cores are just marketing
look at the very-hot-and-outdated-for-now S800: 300 MHz does not consume battery
the only states that influence battery life are 2+ GHz, where the SoC is heating up
because you are not in the business, so you don't consider the marketing aspect of it.
Q'com, MTK, Hisilicon now use Antutu results on their slideshows.
If they only cared about battery management, they would be wiped out by the more performant competitors.
they are pushing for a more performant SoC every year, and obviously they have to take care of power management.
No one has the best recipe so far; MTK is trying with 10 cores, Q'com with 4. the way they achieve it shouldn't really concern us, as long as it works.
If it doesn't work, then you can be sure that customers will stay away from it. I mean, look at the SD810: it was a disaster and Q'com sales went down.
to answer Filipp's question down there:
There are already dual-core SoCs that are cheaper.
Here Mediatek, Qualcomm and others are looking to bake 3 different usage situations into one solution: light use for simple tasks like messaging and internet browsing; then medium-use applications like casual games, editing files and certain others; all while being able to run power-hungry apps/games/cameras that require top-shelf performance.
you see, Apple has high single-core performance, but those are the only 2 cores for everything, saving them money but also burning power like crazy; as we all know, iPhones drain batteries as such.
so basically MTK designed this X20 to be an iPhone-level performer when needed, with the 2 higher cores, but to put them to rest when you just need to make a call, send a simple WhatsApp message or chat for a while, so the battery gets proper efficient use from a more relaxed, more tolerant power drain…
that's the way I see it; if anyone cares to correct me, please feel welcome.
long live MTK X20, X30, X40 and whatever… just keep them cheap hahahaha!!!
Actually there is a need for another dedicated multi-purpose cluster, but not as an additive or inter-cluster one. Up until now it hasn't been possible to implement one, as there was no ARMv8 (64-bit) licensable solution (or any other) that was power/cost-optimized enough to justify it. Now with the Cortex-A35 this changes a lot. When I say dedicated cluster I mean one made of two A35s clocked up to 400 MHz, with additional DSP blocks, able to address all the needs of a device in its inactive state (messages et cetera, along with white noise, storage IO & other light tasks, as well as music playback), leaving the bigger cores in sleep mode while being powerful enough for the wake-up process to go smoothly. In the active state of the device they can still perform peripheral offloading tasks for the other, bigger clusters, as storage IO controllers, for music playback et cetera.
For main tasks and the mainstream, two quad clusters in a big.LITTLE config still remain the best solution considering multitasking needs. The next engineering task would be creating a better & more efficient mid-range performer, as the Krait cores were for the ARMv7 (32-bit) architecture, or as the Cortex-M7 still is. Using only one quad cluster, with scaling up conditional between 3 main states (golden 400 MHz, balanced 1~1.2 GHz & max above 1.2 GHz), triggered at, let's say, 82% load sustained for about the time it takes to blink an eye, would be the best that can be achieved right now considering the user-experience-quality/power-consumption/performance metric. Along with the small dedicated cluster I mentioned, of course.
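The three-state scaling policy described above can be sketched as a toy governor. Everything here (the exact frequencies, the 82% threshold, a "blink" approximated as three 100 ms load samples, and the 30% step-down threshold) comes from the comment or is an illustrative assumption, not from any real cpufreq driver:

```python
# Toy three-state frequency governor: hold the "golden" 400 MHz state,
# step up only when load stays above the threshold for roughly one
# eye-blink of samples, and step down when the window is nearly idle.

STATES_MHZ = (400, 1100, 1500)   # golden / balanced / max (illustrative)
LOAD_UP_THRESHOLD = 0.82         # the comment's suggested trigger
STEP_DOWN_THRESHOLD = 0.30       # assumed idle threshold
BLINK_SAMPLES = 3                # e.g. 3 x 100 ms polls ~ one blink

def next_state(current_idx: int, recent_loads: list) -> int:
    """Pick the next frequency-state index from a window of load samples."""
    window = recent_loads[-BLINK_SAMPLES:]
    if len(window) == BLINK_SAMPLES and min(window) >= LOAD_UP_THRESHOLD:
        return min(current_idx + 1, len(STATES_MHZ) - 1)  # sustained load
    if max(window, default=0.0) < STEP_DOWN_THRESHOLD:
        return max(current_idx - 1, 0)                    # nearly idle
    return current_idx                                    # otherwise hold

idx = 0
for loads in ([0.1, 0.2, 0.1], [0.9, 0.85, 0.88], [0.95, 0.9, 0.99]):
    idx = next_state(idx, loads)
print(STATES_MHZ[idx])  # climbs to 1500 only after sustained high load
```

The point of requiring the whole window above the threshold is exactly the user-experience/power trade-off the comment describes: a single load spike never leaves the efficient 400 MHz state, but anything a human would actually notice does.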
“To put things into perspective, the Kirin 950 manages something around 6200 multi-core points, while the Apple A9 does 2500+ single-core points.”
Some more perspective on it: Intel's best single-core score is about 4400, whereas a still highly respectable score is the i7-4820k's 3300… this leaves Apple at only about 75% of the performance of a solid, and still expensive, Intel part. The highest-scoring AMD part in the single-core test is the FX-9590, which roughly ties the A9 at 2461. Of course all of this is only single-core and so not indicative of overall processor performance, and total system performance depends on peripherals, apps, and use cases as well as the CPU and RAM covered by this benchmark.
PC desktop users considering upgrades are told an upgrade is fine unless they bought in the last year or two, but aren't really urged to upgrade unless they are at SNB or below. A common score for a "reasonably powerful" PC that doesn't need upgrading, then, might be well represented by the i5-3570k, which scores 3360/10800 (single and multi); the X20 is scoring about 70% of those numbers.
A minimum "reasonably powerful desktop" might be better represented by AMD, with the A10-7700k, a quad core that doesn't burn ginormous quantities of power or cash. It scores 2070/6140, meaning it is slightly weaker than the Helio X20.
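A quick check of those ratios, using only the Geekbench 3 scores quoted in this thread (leaked/estimated figures, not official results):

```python
# Score ratios from the numbers quoted in the comments above.
# All scores are as claimed in the thread, not independently verified.

scores = {
    "Helio X20": {"single": 2094, "multi": 7037},
    "i5-3570k":  {"single": 3360, "multi": 10800},
    "A10-7700k": {"single": 2070, "multi": 6140},
}

x20, i5 = scores["Helio X20"], scores["i5-3570k"]
print(round(x20["single"] / i5["single"], 2))  # ~0.62 of the i5 single-core
print(round(x20["multi"] / i5["multi"], 2))    # ~0.65 of the i5 multi-core
```

So the exact figures come out at 62-65% of the i5-3570k, a touch below the quoted "about 70%" but in the same ballpark, and indeed slightly above the A10-7700k on both axes.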
You really need to do a real one-by-one test from source (compiling it optimized) & exclude any crypto present in Geekbench 3 so that CISC vs RISC makes any sense, & even then it would be just a possible performance metric.
You see, ARM is still rather poor in the SIMD area, along with compilers & tools. That was built up for the x86 platform over a rather long time; SIMD extension variety, along with specialized toolchains, compilers & math libs, are fronts where ARM simply can not (yet) compete with x86. It's kind of sad that ARM is charging an extra fee (& an expensive one) for their optimized math libs, as they won't get far this way; on the other hand ARM GCC is far behind, while LLVM, with its wide engagement & lots of back-end funding (Apple, Samsung…) along with lots of branches (Clang, Zapcc…), shows a lot of potential, though it will actually need 2~3 more years of development to take the lead.
Comparing a CISC Intel x86 CPU to a RISC ARM CPU is pointless. It's true that RISC ARM CPUs have become very powerful and come a long way in hardware. But then again this chipset is using more cores (8 cores) vs Intel/AMD (4 cores). Octa-core CPUs are way overpowered for their own good. Most smartphone consumers only use apps like Facebook, Instagram, Vine, Snapchat, WhatsApp and some games. And I highly doubt you need a full 8-core CPU for that. Games like Asphalt 8 and Dead Trigger 2 only use two cores and depend heavily on GPU power alone. Single-core performance really matters more on smartphones than multi-core. Not saying higher core counts like hexa-core or octa-core are useless; that's why Intel Haswell-E, Broadwell-E and Intel Xeon CPUs exist. Though I still don't get why Intel is still selling dual cores. Dual cores should have been phased out a long time ago. Quad cores should be the standard, with hexa- or octa-cores on high-end PCs. Let's just hope Intel's Cannonlake CPUs fix all of that.
Even though I think we basically disagree here, voted you up because you made a different claim than I would have expected at first. It is good to learn something new! thanks.
> You really need to do a real one by one test from the source (optimized compiling it) & exclude any crypto present in geekbench3 so that Cisc & RISC could get any sense
Well I don’t need to do that. 🙂
Actually most people feel the other way. The reason the benchmark receives flak about its x86 scores is that it removed optimizations, because Intel's own compilers were skipping repetitious steps through optimization. But actually measuring the time it takes to do the work is the whole point of the benchmark. You specifically want it without that kind of optimization. So no, I disagree with the majority as much as I disagree with your line of argument; RISC compares quite fairly to CISC in the benchmark.
ARM might perform a bit better with a better optimized compilation, and CISC might be able to skip doing the work entirely to some degree in real world scenarios… but measuring what they actually do, the poor optimizations allowed are “close enough”.
Just a small lib with 400 math functions, for comparison purposes as you will see, & an almost complete list of popular x86 compilers. The project itself is more academic, but remarkably it's not tied to anything or anyone when it comes to implementing it.
http://www.yeppp.info/benchmarks.html
this is ridiculous… even though we shouldn't compare PC specs with mobile specs, at this point it's kinda interesting.
for today's AAA games all you need is a 4-core proc, where a high-end 6-core gives around the same performance as the 4-core. my point is, more cores ≠ better performance (in most cases).
why does a phone need 10 cores? I got lost here…
I actually prefer Apple's move, where they boost single-core performance…
even with 3 workloads (low, medium and high) all you need is 6 (or 3) cores
==============
and the article's title is a bit misleading as well. X20 decimates other SoCs?
Samsung's upcoming Exynos 8890 Geekbench score is 7400, as per this article:
http://www.phonearena.com/news/Snapdragon-820-vs-Exynos-8890-leaked-multi-core-Geekbench-result-chart-shows-Samsung-advantage_id75810
My god this is going to be one hell of a processor….
Makes me salivate seeing these scores, but even modern MediaTek chips have extremely poor GPS reception, and GPS is practically unusable when the phone is used like a dedicated navigator device (no mobile data, no SIMs, no Wi-Fi, device-only mode with an offline maps app, no EPO files). Under these constraints, MediaTek SoCs basically don't have GPS.
What a shame; numbers-wise, this is my kind of processor.
Yes, sure. That's why I have two 'modern MTK chips' (6752 & 6753, and tested the 6795 Helio X10) and GPS on all of them works flawlessly without any 'data connection'.
Please, try one, because you obviously didn't 😉
And BTW, EPO hasn't worked for months…
You’re an idiot.
I challenge you to get a cold lock (turning on device-only GPS after rebooting) with no SIM cards, no Wi-Fi data, Google Location Services turned off completely, and any one of the offline-maps apps (like Sygic or Maps.ME) pre-loaded and configured with the offline maps of your location.
Here are the steps for you:
– Using Wi-Fi or whatever, download an offline maps app and download the offline maps for your region.
- Remove SIMs, turn off Wi-Fi and all other connectivity
– In GPS settings, set mode to device only
– In Google settings app, disable location services (set to Off)
- Get into a car (or public transport, or any shaded area; not necessarily indoors, but a little challenging, hence a car / public transport is ideal)
– Turn on phone and load your offline maps app
– Wait 100 years for the lock to happen
You're the one who probably doesn't know how to truly test the purely device-based GPS capabilities of your phone, always leaving a loophole here and there that results in some sort of assist, which ends up hiding the SoC's poor capabilities. Typical fanboy.
Actually I'm confused by your steps; they don't look like they make sense.
I have no problem with the GPS on my MT6752 phone.
If this is true, it would be better than the Snapdragon 820; that's impressive. Samsung's yet-to-be-released Exynos 8890 will probably top even the Helio X20, though.
the multicore performance is almost fake. There are no real apps that can take advantage of 10 simultaneous cores.
Apart from that, the top performance is thanks to the 2 A72 cores.
We'll see 2x A72 cores on the Qualcomm SD650
and 4x A72 cores on the Qualcomm SD652
So I bet this performance will easily come to the average 2016-H2 smartphone
Pretty useless without decent optimization.
MediaTek just adds extra cores instead of reaching the same result with lesser hardware but good optimization.
Right now it's just good for benchmarks, but Android won't utilize it correctly.
This is even better than the upcoming Galaxy S7!! http://www.sammobile.com/2016/01/21/galaxy-s7-specs-appear-on-geekbench/