Posted almost 12 years ago
Hello everyone. I am oldfox126, founder of Btctele.com, a site for topping up mobile phone credit with Bitcoin.
As everyone knows, Bitcoin activity in China is still concentrated on trading and mining; consumer-facing Bitcoin applications are rare. But we also all know that Bitcoin only has value if it actually circulates.
With that in mind, to change the scarcity of Bitcoin applications in China and to make our own contribution to Bitcoin's circulation, we carefully developed the country's first consumer Bitcoin application: topping up mobile phone credit with Bitcoin at www.Btctele.com
Through this platform you can pay with Bitcoin to recharge China Mobile, China Unicom and China Telecom phone accounts directly. Nationwide direct top-up is supported, and the whole process runs fully automatically, with no manual intervention.
Btctele.com began trial operation on June 15 and was warmly welcomed by Chinese Bitcoin enthusiasts as soon as it launched; top-up volume has been growing steadily ever since.
Many users started out just wanting to give it a try, recharging 5 yuan. Once they saw that it really worked, they immediately recharged 50 or 100;
one user even recharged 700 yuan in one go…
Almost three months have passed since the site went live. Thank you all for your strong support.
Some users have even told me that as long as Btctele.com exists, they will use it for all their phone top-ups from now on…
We have also kept improving:
2013/6/18: we opened top-ups for testing 24 hours a day (previously top-ups were only available between 9:00 and 23:00)
2013/7/16: we enabled a new top-up channel offering more denominations (previously only 50 and 100 were available; now there are 5/10/20/30/50/100 and more)
2013/7/20: we launched a new website interface; the old site became the mobile version at m.btctele.com
…
We hope to live up to your support and trust: Btctele.com has handled every single top-up order responsibly.
Happily, we have kept a record of zero complaints so far ^_^
Today, our top-up site is taking another step forward.
As of today, Btctele.com is officially operated by the Xiangfu team.
Xiangfu was one of the earliest people in China to get involved with BTC; he maintains the Avalon miner firmware and contributes to open source projects such as Cgminer, OpenWrt and Qi Hardware.
I believe that under Xiangfu's leadership, Btctele.com will provide everyone with even better service.
As for me (oldfox126), I will gradually step back from the management and operation of Btctele.com.
For any questions about Btctele.com, please contact customer service via the site: http://www.btctele.com/contact.php
Btctele.com will serve you wholeheartedly ^_^
Let's work together toward a brighter future for Bitcoin.
Thank you all!
oldfox126
2013/9/13
PS:
The Xiangfu team's first measure after taking over was to cut the top-up fee from 1% to 0%.
That means denominations of 50 and above now carry a 0% fee. For denominations below 50, the channel costs are higher, so the fee cannot drop to 0% right away, but it has still been reduced by one percentage point.
If you need a top-up, come and take a look ^_^ The address is: http://www.btctele.com/
Quoted from: https://bitcointalk.org/index.php?PHPSESSID=s8c3tt5b4urbcort9h0ufa75h4&topic=293657.0
|
Posted almost 12 years ago
Bitcoin in China now has a use beyond exchanges and gambling sites: Btctele.com, direct mobile phone credit top-up with Bitcoin. The top-up process requires only one confirmation block.
Press coverage:
[First in China] Bitcoin phone top-up site BtcTele.com launches for testing — 比特币爱好者
BtcTele.com: Bitcoin phone top-up goes into testing — 比特币中文网
A Bitcoin phone top-up application appears in China — 海峡比特币网
A very fashionable Bitcoin top-up application — 比特时代
The BTCTELE phone credit top-up service launches in China — 比特币资讯网
My first real-life Bitcoin application report: phone top-up — 风季灵
The first time I spent bitcoin and really used it as money! — goldlyre
A website now lets you pay phone bills with Bitcoin — 今日早报
Phone top-up is among the most common uses for Chinese Bitcoin holders — 人民网
|
Posted almost 12 years ago
As of Friday night, I am now on a two-month unpaid leave. There are a few reasons I want to do this. It’s getting towards the 3-year point at Mozilla, and that’s usually the sort of time I get itchy feet to try something new. I also think I may have been getting a bit close to burn-out, which is obviously no good. I love my job at Mozilla, and I think they’ve spoiled me too much for me to easily work elsewhere even if that weren’t the case, so that’s one reason to take an extended break.
I still think Mozilla is a great place to work, where there are always opportunities to learn, to expand your horizons and to meet new people. An unfortunate consequence of that, though, is that I think it’s also quite high-stress. Not the kind of obvious stress you get from tight deadlines and other external pressures, but a more subtle, internal stress that comes from constantly striving to keep up and be the best you can be. Mozilla’s big enough now that it’s not uncommon to see people leave, but a disproportionate number of them seem to cite stress or needing time to deal with life issues as part of the reason for moving on. Maybe we need to get better at recognising that, or at encouraging people to take more personal time?
Another reason, though, and the primary one, is that I want to spend some serious time working on creating a game. Those who know me know that I’m quite an avid gamer, and I’ve always had an interest in games development (I even spoke about it at GUADEC some years back). Pre-employment, a lot of my spare time was spent developing games. Mostly embarrassingly poor efforts when I view them now, but it’s something I used to be quite passionate about. At some point, I think I decided that I preferred app development to games development, and went down that route. Given that I haven’t really been doing app development since joining Mozilla, it feels like a good time to revisit games development. If you’re interested in hearing about that, you may want to follow this Twitter account. We’ve started already, and I like to think that what we have planned, though very heavily influenced by existing games, provides some fun, original twists. Let’s see how this goes.
|
Posted almost 12 years ago
In 1974, Enzo Mari published “Autoprogettazione”, a book of plans that can only be described as open-source furniture. In its final form, the book was sent, for free, to anyone who asked for it. Really it was more of a project than a book, though. The purpose, says Mari, was to teach how to “judge current production with a critical eye”.
How is it possible to change the state of things? This is what I ask myself. How is it possible to accomplish the deconditioning of form as a value rather than as strictly corresponding to contact? The only way I know, in that it belongs to my field experience, is what becomes possible when critical thought is based on practical work. Therefore the only way should be to involve the user of a consumer item in the design and realization of the item design. Only by actually touching the diverse contradictions of the job is it possible to start to be free from such deeply rooted conditioning. But how is it possible to expect such an effort when the production tools are lacking as is, above all, the technical know-how, the technical culture it would take a fairly long time to acquire?
In the autumn months, I plan to adapt one of his table designs for an outdoor workbench.
|
Posted almost 12 years ago
Ever since OK Computer, I’ve loved Radiohead. I so admire how they’ve traveled their own path, and done so with immense commercial success: selling over 30 million albums. I remember staying up late to support their pay-what-you-want release of “In Rainbows”, and then being inspired as hell when I learned they shot “House of Cards” (2008) using not cameras, but lasers. (The visualization was done using Processing. They even open-sourced the data on Google Code!)
This past week Nigel Godrich, their longtime engineer / producer / musician, went after Spotify:
Anyway. Here's one. We're off of spotify.. Can't do that no more man..
Small meaningless rebellion.
— nigel godrich (@nigelgod) July 14, 2013
The music industry is being taken over by the back door.. and if we don't try and make it fair for new music producers and artists…
— nigel godrich (@nigelgod) July 14, 2013
..then the art will suffer. Make no mistake. These are all the same old industry bods trying to get a stranglehold on the delivery system..
— nigel godrich (@nigelgod) July 14, 2013
Streaming is obviously the music distribution model going forward. I listen to Spotify; I think it’s an amazing product. But I totally agree with Nigel here: that doesn’t make it right for the channel to commodify artists to keep its share prices up.
Something’s got to change. Our industry (tech) is terrible at this sort of thing (music, apps, newspapers, …). I can’t tell you how many times people have told me, “Content is king.” You know what? It’s total bullshit. It’s ludicrous to pretend that ones and zeros are all created equal. Kill off the ability of the creatives to make a living, and we’ll see how that “content” sounds.
I’m with Radiohead on this one. We need a rebellion.
|
Posted almost 12 years ago
I’ve never really considered myself an unhealthy person. I exercise quite regularly and keep up a reasonable number of active hobbies (climbing, squash, tennis). That hasn’t really lapsed much, except for the time the London Mozilla office wasn’t ready and I worked from home; I think I climbed less during that period. Apparently, though, that isn’t enough… After EdgeConf, I noticed in the recording of the session I participated in that I was looking a bit more plump than the mental image I had of myself. I weighed myself, and came to the shocking realisation that I was almost 14 stone (89 kg). This put me well into the ‘overweight’ category, and was at least a stone heavier than I thought I was.
I’d long been considering changing my diet. I found Paul Rouget’s post particularly inspiring, and discussing diet with colleagues at various work-weeks had put ideas in my head. You could say that I was somewhat of a diet sceptic; I’d always thought that exercise was the key to maintaining a particular weight, especially cardiovascular exercise, and that with an active lifestyle you could get away with eating what you like. I’ve discovered that, for the most part, this was just plain wrong.
Before I go into the details of what I’ve done over the past 5 months, let me present some data:
|
Posted about 12 years ago by [email protected] (zecke)
This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest. GNU autoconf ships with a not very well known piece of software called GNU autotest, and that is what this post is about.

GNU autotest is a very simple framework/test runner. One defines a testsuite, and this testsuite launches test applications and records the exit code, stdout and stderr of each test application. It can diff the output with the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept about the execution of each test. The tool integrates nicely with automake's make check and make distcheck, which will execute the testsuite and, in case of a test failure, fail the build.

The way we use it is quite simple as well. We create a small application inside the tests/testname directory and most of the time just capture its output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application will abort and print a backtrace. This means that on failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED.

The following goes through the details of enabling autotest in a project.

Enabling GNU autotest

The configure.ac file needs a line like this: AC_CONFIG_TESTDIR(tests). It needs to be put after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal.

Integrating with automake

The next thing is to define a testsuite inside tests/Makefile.am. This is some boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process:

    # The `:;' works around a Bash 3.2 bug when the output is not writeable.
    $(srcdir)/package.m4: $(top_srcdir)/configure.ac
            :;{ \
              echo '# Signature of the current package.' && \
              echo 'm4_define([AT_PACKAGE_NAME],' && \
              echo '  [$(PACKAGE_NAME)])' && \
              echo 'm4_define([AT_PACKAGE_TARNAME],' && \
              echo '  [$(PACKAGE_TARNAME)])' && \
              echo 'm4_define([AT_PACKAGE_VERSION],' && \
              echo '  [$(PACKAGE_VERSION)])' && \
              echo 'm4_define([AT_PACKAGE_STRING],' && \
              echo '  [$(PACKAGE_STRING)])' && \
              echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
              echo '  [$(PACKAGE_BUGREPORT)])'; \
              echo 'm4_define([AT_PACKAGE_URL],' && \
              echo '  [$(PACKAGE_URL)])'; \
            } >'$(srcdir)/package.m4'

    EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
    TESTSUITE = $(srcdir)/testsuite
    DISTCLEANFILES = atconfig

    check-local: atconfig $(TESTSUITE)
            $(SHELL) '$(TESTSUITE)' $(TESTSUITEFLAGS)

    installcheck-local: atconfig $(TESTSUITE)
            $(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)' \
                    $(TESTSUITEFLAGS)

    clean-local:
            test ! -f '$(TESTSUITE)' || \
                    $(SHELL) '$(TESTSUITE)' --clean

    AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
    AUTOTEST = $(AUTOM4TE) --language=autotest
    $(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
            $(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
            mv $@.tmp $@

Defining a testsuite

The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:

    AT_INIT
    AT_BANNER([Regression tests.])

    AT_SETUP([gsm0408])
    AT_KEYWORDS([gsm0408])
    cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout
    AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], [], [expout], [ignore])
    AT_CLEANUP

This will initialize the testsuite and create a banner. The lines between AT_SETUP and AT_CLEANUP represent one testcase.
In there we are copying the expected output from the source directory into a file called expout, and then inside the AT_CHECK directive we specify what to execute and what to do with the output.

Executing a testsuite and dealing with failure

The testsuite will be executed automatically as part of make check and make distcheck. It can also be run manually by entering the tests directory and executing the following:

    $ make testsuite
    make: `testsuite' is up to date.
    $ ./testsuite
    ## ---------------------------------- ##
    ## openbsc 0.13.0.60-1249 test suite. ##
    ## ---------------------------------- ##
    Regression tests.
      1: gsm0408        ok
      2: db             ok
      3: channel        ok
      4: mgcp           ok
      5: gprs           ok
      6: bsc-nat        ok
      7: bsc-nat-trie   ok
      8: si             ok
      9: abis           ok
    ## ------------- ##
    ## Test results. ##
    ## ------------- ##
    All 9 tests were successful.

In case of a failure the following information will be printed and can be inspected to understand why things went wrong:

    ...
      2: db             FAILED (testsuite.at:13)
    ...
    ## ------------- ##
    ## Test results. ##
    ## ------------- ##
    ERROR: All 9 tests were run,
    1 failed unexpectedly.
    ## -------------------------- ##
    ## testsuite.log was created. ##
    ## -------------------------- ##
    Please send `tests/testsuite.log' and all information you think might help:
       To:
       Subject: [openbsc 0.13.0.60-1249] testsuite: 2 failed
    You may investigate any problem if you feel able to do so, in which case the
    test suite provides a good starting point. Its output may be found below
    `tests/testsuite.dir'.

You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there will be one directory that contains a log file about the run and the output of the application.

We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.
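To round the walkthrough off, here is a hypothetical sketch of what one of those small test applications can look like. It is illustrative only: the OSMO_ASSERT below is a simplified stand-in for the macro from libosmocore (the real one also prints a backtrace), and parse_mobile_identity is an invented placeholder for the code under test.

    /* gsm0408_test.c - hypothetical sketch of a test application in the
     * OSMO_ASSERT style described above; the real tests live in each
     * project's tests/ directory. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for libosmocore's OSMO_ASSERT (which also
     * prints a backtrace before aborting). */
    #define OSMO_ASSERT(exp) \
        do { \
            if (!(exp)) { \
                fprintf(stderr, "Assert failed %s %s:%d\n", \
                        #exp, __FILE__, __LINE__); \
                abort(); \
            } \
        } while (0)

    /* Invented placeholder for the GSM 04.08 code under test. */
    static int parse_mobile_identity(const char *mi)
    {
        return mi != NULL;
    }

    int main(void)
    {
        printf("Testing mobile identity parsing\n");
        OSMO_ASSERT(parse_mobile_identity("262420123456789") == 1);
        OSMO_ASSERT(parse_mobile_identity(NULL) == 0);
        /* stdout must match gsm0408_test.ok byte for byte. */
        printf("Done\n");
        return 0;
    }

If any assertion fails, abort() changes the exit code and stdout stops matching the expout file, so autotest marks the testcase as FAILED, exactly as described above.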
|
Posted about 12 years ago
Most people will have seen the “Call Me Maybe” series (so named for the song by Carly Rae Jepsen) of blog posts about data loss in the face of network partition. Midway through the last post in the series is what is almost an off-the-cuff comment, but I think it’s everything:
“Consistency is a property of your data, not of your nodes.”
We tend to get overwhelmed with replication configurations, high-availability solutions, sharding strategies, and worrying about how a given database will react under various failure modes.
And yet the essential truth is that we’re so busy worrying about what’s stored on disk that we forget we don’t care about the consistency of what’s on disk. We need to care about the consistency of our data. It’s easy for a misbehaving program to write garbage, but not to worry: we’re absolutely certain that garbage is consistently replicated across the cluster. Yeah, well done there.
So the much bigger challenge in high-availability distributed systems is making sure we have sane rules for propagating changes so that we can have a safe view of our data.
About 10 years ago I was working with a Java-based object-oriented database (which is a grandiose name for what was as much a disk-backed datastore as anything else, but if you’re morbidly curious about what sort of API such a beast would have, you can read about db4o in a series of posts I wrote about it). It was surprisingly easy to use, and came along at a time when I was prepared to do just about anything to escape object-relational mapping hell.
It got significant adoption in embedded devices, where zero administration is a necessity and where developers don’t want to deal with the machinery of a full-scale RDBMS just to store e.g. configuration parameters. But surprise: it wasn’t long before users started asking for replication features. Now, usually when you hear that term you think of master/slave replication done at database-engine level in a high-availability setup. In this case, however, they had disconnected devices re-establishing connectivity to enterprise datastores, and because of that you had to cope with significant conflicts when it came time to synchronize.
Because the data model was articulated in terms of Java code (to a naive first approximation, you were just storing Java objects), the data model lived in the same place as the application code, domain layer, and validation logic. This meant that when it came time to cope with those conflicts, the natural place to do that was in the same Java code. This was interesting, because for just about every other database engine out there data is opaque. Oh, sure, RDBMS have types (though that there are people who think VARCHAR(256) actually tells you anything useful remains a source of wonder; alas, I digress), but if you have a high-availability configuration and you’ve allowed concurrent activity during a network partition, then you have diverged replicas and thus have to merge them. The database doesn’t know what to do; how could it? No: consistency is a property of your data, not the datastore; the rules that decide how to synchronize are a business decision, so where better to put them than in the business logic?
Peter Miller suggests the example of booking flights: multiple passengers can end up allocated the same seat on an oversold flight, but the decision about who gets which seat happens at check-in and conflict resolution is a business one made by the airline staff, not the database.
Throughout the Jepsen posts, you’ll see occasional mention of “CRDTs” as an alternative to the problems of attempting to achieve simultaneous write safety in a distributed system. Finding out just what a CRDT is took a bit more doing than I would have expected; hence this post.
Convergent and Commutative Replicated Data Types
It’s easy to have Consistency when you impose synchronous access to your data. But the locks needed to give that property don’t scale to distributed systems; you need data that can cope with delay. The idea of self-healing systems has been around for a while, but there hasn’t been much formal study of which data types meet these requirements. If you’re at all interested, I’d encourage you to read “A comprehensive study of Convergent and Commutative Replicated Data Types” by Shapiro, Preguiça, Baquero, and Zawirski.
http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf
They use set notation and a form of pseudocode to describe the different data types, which makes the read a bit more serious than it needs to be, but having had my head buried in this paper for a few days I can say the effort has paid off. They articulate the conditions under which a state-based system can handle merges (the requirement is for the datatype to be a join semilattice; if it is, they show the replicas will converge) and the conditions for an operation-based one (aka the command pattern to us programmer types), where the requirement is for manipulations of the datatype to be commutative. They also show the two models are equivalent, which is handy.
Here’s a schematic illustration of a state-based convergent replicated data type, from the paper:
The idea being that if you have a merge function, then it doesn’t matter where a state change is made; it will eventually make its way to all replicas.
Which raises the topic of eventual consistency. Anyone who has worked with Amazon S3 has discovered (the hard way, inevitably) that mutating an existing value has wildly undefined behaviour as to when other readers will see that change. CRDTs, on the other hand, exhibit “strong eventual consistency” (or perhaps better, “strong eventual convergence”, as Murat Demirbas put it in his analysis of the topic), whereby the propagation behaviour is well defined.
The surface area you can use one of these data types on is limited. Because the data type is neither synchronous nor backed by a consensus protocol maintaining the appearance of a single entity, you cannot by definition have a global invariant. So you can track all the additions and subtractions to an integer (summing the like and dislike clicks on a page, for example); addition commutes, and eventually all the operations will be applied to all the replicas. What you can’t do is enforce that the variable never goes below zero (an account balance, say), because two machines with the value at 1 could simultaneously apply a -1 operation, breaking the invariant once that operation propagates. If this seems a bit hypothetical, consider the well-documented shopping cart problem encountered by a certain major global online bookseller: delete a book from your cart and, sure enough, five minutes later it’s back again. A classic case of the failure mode encountered by distributed key-value stores.
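To make this concrete, below is a minimal sketch of a state-based PN-counter in C. It is my own illustration, not code from the paper, and it assumes a fixed, known number of replicas for brevity. The merge is an element-wise max, which is the join of the semilattice: commutative, associative and idempotent, so replicas converge regardless of gossip order. It also demonstrates the invariant problem just described: two replicas that both see the value 1 can concurrently decrement.

    /* A minimal sketch of a state-based PN-counter CRDT, assuming a
     * fixed number of replicas known in advance (a simplification;
     * real systems use dynamic replica identifiers). */
    #include <stdio.h>

    #define REPLICAS 3

    typedef struct {
        long inc[REPLICAS]; /* increments observed per replica */
        long dec[REPLICAS]; /* decrements observed per replica */
    } pn_counter;

    /* Local update: replica 'id' adds 'n' (n may be negative). */
    static void pn_add(pn_counter *c, int id, long n)
    {
        if (n >= 0)
            c->inc[id] += n;
        else
            c->dec[id] += -n;
    }

    /* Current value: all increments minus all decrements. */
    static long pn_value(const pn_counter *c)
    {
        long v = 0;
        for (int i = 0; i < REPLICAS; i++)
            v += c->inc[i] - c->dec[i];
        return v;
    }

    /* Merge is an element-wise max: the join of the semilattice.
     * Commutative, associative and idempotent, so replicas converge
     * no matter how often or in what order states are gossiped. */
    static void pn_merge(pn_counter *dst, const pn_counter *src)
    {
        for (int i = 0; i < REPLICAS; i++) {
            if (src->inc[i] > dst->inc[i]) dst->inc[i] = src->inc[i];
            if (src->dec[i] > dst->dec[i]) dst->dec[i] = src->dec[i];
        }
    }

    int main(void)
    {
        /* Two replicas, both starting from a synchronised value of 1. */
        pn_counter a = { .inc = { 1 } }, b = { .inc = { 1 } };

        pn_add(&a, 1, -1); /* replica 1 decrements during a partition */
        pn_add(&b, 2, -1); /* replica 2 does the same, concurrently   */

        pn_merge(&a, &b);
        pn_merge(&b, &a);

        /* Both replicas converge, but there was no global view against
         * which a "never below zero" rule could have been enforced.   */
        printf("a=%ld b=%ld\n", pn_value(&a), pn_value(&b)); /* a=-1 b=-1 */
        return 0;
    }

Run it and both replicas agree on -1: convergence achieved, global invariant gone.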
At first you’d think that this limitation would seriously cramp your style, or that there wouldn’t be any real-world data types that meet these requirements, but it turns out there are. The significant contribution of the paper is that they come up with a formal definition of what a CRDT needs to look like, then explore around a bit and show a number of different datatypes that do meet the requirements.
The paper also includes an impressive reference list and discussion of prior art in the space, so it’s worth a read. There’s also “Conflict-free Replicated Data Types” by the same authors, which formalizes SEC.
http://pagesperso-systeme.lip6.fr/Marc.Shapiro/papers/CRDTs_SSS-2011.pdf
Back to the effect of network partitions on data safety:
What about Ceph?
Good question.
What I would be interested in now is how Ceph‘s various inter-related pieces hold up in the face of the sort of aggressive network partition testing conducted in the Jepsen survey. Reading a recent blog article about how the Ceph monitor services have re-implemented their use of Paxos, it struck me as extraordinarily complicated. “One Paxos to rule them all”? Oh dear.
I’m doing a back-of-the-envelope examination, but I think I already know the answer: you’re not going to get a write acknowledged until it is durably stored, which is Consistency. Ceph is a complex system, and parts of it can be offline while others continue to provide service, so you’d have to break it down to the provision of a single piece of mutable data before you could study the Availability of the system properly. I’d love to find someone who would like to do a real analysis using the Jepsen techniques; it would be interesting to see.
But this all reminds us why we’re interested in CRDTs in the first place: systems where you can build synchronous communication (or an external appearance thereof, courtesy of consensus protocols used internally) to achieve Consistency are in essence limited to highly controlled clusters in an individual data centre. Most real-world systems involve components distributed across geographic, temporal, and logical distances, and that means you must take into account the limits on the speed of information propagation. While most people immediately think of the light-speed problem, it applies just as much to any distributed environment; and in any real-world information system we need to serve clients concurrently, and that means the technique of using a CRDT where possible might very well be worth the effort.
AfC [Less]
|
Posted about 12 years ago
The last time I was in Cambridge we had a discussion about ARM processors, during which Paweł used the term “ARMology”. With the recent announcement of the Cortex-A12 CPU core, I thought it might be a good idea to write a blog post about it.
Please note that my knowledge of ARM processors starts in 2003, so I may make mistakes about anything older. I have tried to understand articles about the old days, but they do not always agree on one version of the story.
Ancient times
The ARM1 was released in 1985 as a CPU add-on for the BBC Micro, manufactured by Acorn Computers Ltd. as the result of a few years of research. Acorn wanted a new processor to replace the ageing 6502 used in the BBC Micro and Acorn Electron, and none of the existing ones fit their requirements. Note that it was not a market product but rather a development tool made available to selected users.
But it was the ARM2 which landed in a new computer: the Acorn Archimedes (1987). It added multiply instructions, so a new version of the instruction set was created: ARMv2. Just an 8MHz clock, but remember that it was the first computer with the new CPU…
Then came the ARM3, with an integrated cache controller and a 25MHz clock. The ISA was bumped to ARMv2a due to the added SWP instruction. It was released in another Acorn computer, the A5000, and was also used in the Acorn A4, the first ARM-powered laptop (though the term “ARM Powered” was coined a few years later). I hope that one day I will be able to play with all those old machines…
There was also the ARM250, with the same ARMv2a instruction set as the ARM3 but no cache controller. It is worth mentioning because it can be seen as the first SoC, with the ARM, MEMC, VIDC and IOC chips integrated into one piece of silicon. This allowed budget versions of the computers to be built.
ARM Ltd.
In 1990 Acorn, Apple and VLSI co-founded Advanced RISC Machines Ltd., which took over research and development of ARM processors. The business model was simple: “we work on CPU cores and other companies pay us license fees to make chips”.
Their first CPU was the ARM60, with a new instruction set: ARMv3. It had a 32-bit address space (compared to 26-bit in older versions), was endian agnostic (both big and little endian were possible), and brought other improvements.
Please note the lack of ARM4 and ARM5 processors. I have heard some rumours about why, but will not repeat them here, as some of them just do not fit the facts.
The ARM610 powered the Apple Newton PDA and the first Acorn RiscPC machines, where it was later replaced by the ARM710 (still the ARMv3 instruction set, but ~30% faster).
First licensees
You can create new processor cores, but someone has to buy and manufacture them… In 1992 GEC Plessey and Sharp licensed ARM technology; the next year Cirrus Logic and Texas Instruments were added; AKM (Asahi Kasei Microsystems) and Samsung joined in 1994, and others followed…
From that list I recognize only Cirrus Logic (I used their crazy EP93xx family), TI and Samsung as processor vendors ;D
Thumb
One of the next CPU cores was the ARM7TDMI (Thumb+Debug+Multiplier+ICE), which added a new instruction set: Thumb.
The Thumb instructions were there not only to improve code density, but also to bring the power of the ARM to cheaper devices which might only have a 16-bit datapath on the circuit board (32-bit paths being costlier). When in Thumb mode, the processor executes Thumb instructions. While most of these instructions map directly onto normal ARM instructions, the space saving comes from reducing the number of options and possibilities available: for example, conditional execution is lost (only branches can be conditional), and fewer registers can be directly accessed in many instructions. Given all of this, however, good Thumb code can perform extremely well in a 16-bit world, as each instruction is a 16-bit entity and can be loaded directly.
The ARM7TDMI landed nearly everywhere: MP3 players, cell phones, microwaves and any place a microcontroller could be used. I heard that a few years ago half of ARM Ltd.'s income was from license fees for this CPU core…
ARM7
But ARM7 did not end at the ARM7TDMI… There was the ARM7EJ-S core, which used the ARMv5TE instruction set, and also the ARM720T and ARM740T with ARMv4T. You can run Linux on the Cirrus Logic CLPS711x/EP721x/EP731x ones ;)
According to the ARM Ltd. page about ARM7, the ARM7 family is the world’s most widely used 32-bit embedded processor family, with more than 170 silicon licensees and over 10 billion units shipped since its introduction in 1994.
ARM8
I heard that the ARM8 is one of those things you should not ask ARM Ltd. people about. Not surprising when you look at the history…
The ARM810 used the ARMv4 instruction set and had a 72MHz clock. At the same time DEC released the StrongARM with a 200MHz clock… 1996 was definitely the year of the StrongARM.
In 2004 I bought my first Linux/ARM powered device: the StrongARM-based Sharp Zaurus SL-5500.
ARM9
Ah, ARM9… this was a huge family of processor cores…
ARM moved from a von Neumann architecture (Princeton architecture) to a Harvard architecture with separate instruction and data buses (and caches), significantly increasing potential speed.
Two different instruction sets were used in this family: ARMv4T and ARMv5TE. Some kind of Java support was also added to the latter, but who knows how to use it: ARM keeps the details of Jazelle behind doors which can be opened only with a huge amount of money.
ARMv4T
Here we have the ARM9TDMI, ARM920T, ARM922T, ARM925T and ARM940T cores. Of these I mostly saw the 920T, in far too many chips.
My collection includes:
ep93xx from Cirrus Logic (with their sick VFP unit)
omap1510 from Texas Instruments
s3c2410 from Samsung (note that some s3c2xxx processors are ARMv5T)
ARMv5T
Note: by ARMv5T I mean every CPU, regardless of which extensions it has built in (Enhanced DSP, Jazelle, etc.).
I consider this the most popular family (probably after the ARM7TDMI). Countless companies had their own processors based on these cores (mostly on the ARM926EJ-S). You can even get them in QFP form, so hand soldering is possible. CPU frequencies go over 1GHz with the Kirkwood cores from Marvell.
In my collection I have:
at91sam9263 from Atmel
pxa255 from Intel
st88n15 from ST Microelectronics
I also had an at91sam9m10, a Kirkwood-based Sheevaplug and an ixp425-based NSLU2, but they found new homes.
ARM10
Another quiet moment in ARM history. The ARM1020E, ARM1022E and ARM1026EJ-S cores existed but did not look popular.
UPDATE: Conexant uses an ARM10 core in their next-generation DSL CPE systems such as bridges/routers, wireless DSL routers and DSL VoIP IADs.
ARM11
Released in 2002 as four new cores: ARM1136J, ARM1156T2, ARM1176JZ and ARM11 MPCore. There were several improvements over the ARM9 family, including an optional VFP unit, and a new instruction set: ARMv6 (plus the ARMv6K extensions). The ARM1156 core introduced Thumb-2 support (though I do not know whether anyone made chips with it), and the ARM1176 core got TrustZone support.
I have:
omap2430 from Texas Instruments
i.mx35 from Freescale
Currently the most popular chip in this family is the BCM2835, a GPU which got an ARM1136 CPU core on the die because there was some space left and none of the Cortex-A processor cores would fit there.
Cortex
A new family of processor cores was announced in 2004, with the Cortex-M3 as the first CPU. There are three branches:
Application
Realtime
Microcontroller
All of them (with the exception of the Cortex-M0, which is ARMv6) use the new instruction sets: ARMv7 and Thumb-2 (some of the R/M line cores are Thumb-2 only). Several CPU modules were announced as well (some with newer cores):
NEON for SIMD operations
VFP3 and VFP4
Jazelle RCT (aka ThumbEE)
LPAE for more than 4GB of RAM (Cortex-A7/12/15)
virtualization support (A7/12/15)
big.LITTLE
TrustZone
I will not cover the R/M lines as I have not played with them.
Cortex-A8
Announced in 2006: a single-core ARMv7a processor core. Released in chips by Texas Instruments, Samsung, Allwinner, Apple, Freescale, Rockchip and probably a few others.
It has higher clocks than ARM11 cores and executes roughly twice as many instructions per clock cycle thanks to its dual-issue superscalar design.
So far collected:
am3358 from Texas Instruments
i.mx515 from Freescale
omap3530 from Texas Instruments
Cortex-A9
The first multi-core design in the Cortex family, allowing up to 4 cores in one processor. Announced in 2007. It looks like most of the companies which licensed previous cores licensed this one as well, but new vendors also appeared.
There are also single-core Cortex-A9 processors on the market.
I have products based on omap4430 from Texas Instruments and Tegra3 from NVidia.
Cortex-A5
Announced around the end of 2009 (I remember discussing something new from ARM with someone at ELC/E). Up to 4 cores, intended mostly for the designs where ARM9 and ARM11 cores were used; in other words, a new low-end CPU with a modern instruction set.
Cortex-A15
The fastest (so far) core in the ARMv7a part of the Cortex family. Up to 4 cores. Announced in 2010, it expanded the ARM line with several new things:
40-bit LPAE which extends address range to 1TB (but 32-bit per process)
VFPv4
Hardware virtualization support
TrustZone security extensions
I have a Chromebook with an Exynos5250 CPU and have to admit that it is the best device for ARM software development: fast, portable and hackable.
Cortex-A7
Announced in 2011. The younger brother of the Cortex-A15 design: slower, but it consumes much less power.
Cortex-A12
Announced in 2013 as a modern replacement for Cortex-A9 designs. It has everything from the Cortex-A15/A7 and is ~40% faster than the Cortex-A9 at the same clock frequency. No chips are on the market yet.
big.LITTLE
This is an interesting part, announced in 2011. It is not a new core but a combination of cores: a vendor can mix Cortex-A7/12/15 cores to get a kind of dual-multicore processor which runs different cores for different needs. For example, normal operation stays on the A7 cluster to save energy, and the A15 cluster takes over when more processing power is needed. The number of cores in each cluster does not even have to match.
It is also possible to use all the cores together, which can result in an 8-core ARM processor scheduling tasks across different CPU cores.
There are a few implementations already: the ARM TC2 testing platform, HiSilicon K3V3, Samsung Exynos 5 Octa and Renesas Mobile MP6530 have been announced. They differ in the number of cores, but all (except the TC2) use the same number of A7 and A15 cores.
ARMv8
In 2011 ARM announced a new 64-bit architecture called AArch64. There will be two cores, the Cortex-A53 and Cortex-A57, and big.LITTLE combinations will be possible as well.
A lot changed here. VFP and NEON are now part of the standard, and a lot of work went into making sure the new designs will not end up as fragmented as the 32-bit architecture is.
I worked on AArch64 bootstrapping in the OpenEmbedded build system and also ported several applications.
I hope to see hardware in 2014, with the chance to play with it and check how it performs compared to current systems.
Other designs
ARM Ltd. is not the only company which releases new CPU cores, because there are a few types of license you can buy. Most vendors just buy a license for an existing core and use it in their designs, but some companies (Intel, Marvell, Qualcomm, Microsoft, Apple, Faraday and others) paid for an ‘architectural license’, which allows them to design their own cores.
XScale
Probably the oldest one was the StrongARM made by DEC, later sold to Intel, where it became the base for the XScale family with the ARMv5TEJ instruction set. Later, iWMMXt was added in the PXA27x line.
In 2006 Intel sold the whole ARM line to Marvell, which released newer processor lines and later moved to its own designs.
There were a few lines in this family:
Application Processors (with the prefix PXA)
I/O Processors (with the prefix IOP)
Network Processors (with the prefix IXP)
Control Plane Processors (with the prefix IXC)
Consumer Electronics Processors (with the prefix CE)
One day I will dust off my Sharp Zaurus C760 just to check how recent kernels work on the PXA255 ;D
Marvell
Marvell's Feroceon and PJ1 cores were independent ARMv5TE implementations: Feroceon was Marvell’s own ARM9-compatible CPU used in Kirkwood and others, while PJ1 was based on that and replaced XScale in later PXA chips. PJ4 is the ARMv7-compatible version used in all modern Marvell designs, on both the embedded and the PXA side.
Qualcomm
Known mostly for wireless networking (GSM/CDMA/3G), Qualcomm released its first ARM-based processors in 2007. The first ones were based on the ARM11 core (ARMv6 instruction set); ARMv7a ones became available the next year. Their high-end designs (Scorpion and Krait) are similar to the Cortex family but differ in performance. The company also uses Cortex-A5 and A7 cores in low-end products.
The Nexus 4 uses the Snapdragon S4 Pro, and I also have an S4 Plus based Snapdragon development board.
Faraday
Faraday Technology Corporation released its own processors using the ARMv4 instruction set (ARMv5TE in newer cores): the FA510, FA526 and FA626 for v4, and the FA606TE, FA626TE, FMP626TE and FA726TE for v5te. Note that the FMP626TE is dual core!
They also have a license for the Cortex-A5 and A9 cores.
Project Denver
Quoting the Wikipedia article about Project Denver:
Project Denver is an ARM architecture CPU being designed by Nvidia, targeted at personal computers, servers, and supercomputers. The CPU package will include an Nvidia GPU on-chip.
The existence of Project Denver was revealed at the 2011 Consumer Electronics Show. In a March 4, 2011 Q&A article CEO Jen-Hsun Huang revealed that Project Denver is a five year 64-bit ARM architecture CPU development on which hundreds of engineers had already worked for three and half years and which also has 32-bit ARM architecture backward compatibility.
The Project Denver CPU may internally translate the ARM instructions to an internal instruction set, using firmware in the CPU.
X-Gene
AppliedMicro announced that they will release AArch64 processors based on their own cores.
Final note
If you spot any mistakes please write a comment and I will do my best to fix them. If you have something interesting to add, please comment as well.
I used several sources to collect the data for this post. Wikipedia articles helped me with details about Acorn products and ARM listings, and the ARM Infocenter provided other information. Dates were taken from Wikipedia or the ARM Company Milestones page. The “Ancient times” part is based on the “The ARM Family” and “The history of the ARM CPU” articles; “The history of the ARM architecture” was interesting and helpful as well.
Please do not copy this article without providing author information; it took me quite a long time to finish.
Changelog
8 June evening
Thanks to notes from Arnd Bergmann I made some changes:
added ARM7, Marvell, Faraday, Project Denver, X-Gene sections
fixed Cortex-A5 to be up to 4 cores instead of a single core
mentioned Conexant in the ARM10 section
improved the Qualcomm section to mention which cores are original ARM ones and which are modified
David Alan Gilbert mentioned that the ARM1 was not freely available on the market. Added a note about it.
Related content:
Samsung will have big.LITTLE. So what?
What interest me in ARM world
Death to Raspberry/Pi — Beaglebone Black is on a market
Calxeda announced ARM server product
Speeding up BitBake builds
All rights reserved © Marcin Juszkiewicz
ARMology was originally posted on Marcin Juszkiewicz website
|
Posted about 12 years ago
LinuxTag 2013 is over, and I want to share some brief impressions from our stay in Berlin.
LinuxTag is a nice and well-organized FOSS exhibition in Germany, attracting more than 10,000 visitors over 4 days.
We gave a talk about the OpenPhoenux project on the second evening and had about 60 listeners. Some of them got very interested and followed us to the booth afterwards. For everyone who couldn’t attend, the slides are available online: Slides.pdf
We shared a booth with some other “Linux & Embedded” projects, namely OpenEmbedded, Ethernut, Nut/OS and Oswald/Metawatch. Our booth looked professional, and I think we got quite a few people interested in the project. Basically we had a constant flow of people at the booth during our three-day stay, and the overall feedback was rather positive!
We were interviewed by the “GNU funzt!” team as well. The (German) video is now available on YouTube (the OpenPhoenux interview starts at 5:00):
All in all it was a very nice stay in Berlin. I especially enjoyed meeting and chatting with guys who already owned a GTA04. It looks like the community is growing again!
Links:
OpenPhoenux Project
GTA04 Board
|