Norwegian browser developer Opera Software has confirmed the switch of its browser development to a rapid release cycle with the launch of Opera Next 16. The new version number comes less than a month after Opera 15 FINAL was released, which saw Opera switch from its own proprietary Presto web engine to the Blink engine used by Google Chrome.
As with all rapid release cycle updates, there are no major overhauls to be found in Opera Next 16, although a number of interesting new features have been showcased as the next iteration starts its journey towards final release.
Opera 16 — which is based on Chromium 29, the engine that powers Chrome 29 (currently in beta) — comes with support for the W3C Geolocation API, a form auto-filler tool and opera:flags, a shortcut to settings that allows adventurous users to play with experimental features.
Users will also find a new setting under Browser > Start Page called “Preload Discover contents”, which lets them switch off preloading of content for the Discover feature.
Platform-specific updates include support for Jump Lists in Windows 7 and 8, plus the addition of Presentation mode to the Mac platform.
In addition to these existing features, Opera has revealed the next set of features it’s working on, with the promise that early versions of these will be rolled out into the Opera Next build over the next few weeks. These include proper bookmarks support, synchronization via Opera Link, improved tab handling and themes.
Opera Next 16 is considered “alpha” software, which is why — like Firefox Aurora — it’s designed to run alongside an existing stable build of Opera, allowing users to experiment with new features without affecting their day-to-day browsing. Updates are frequent as bugs are discovered and fixed, but users should not attempt to rely on Opera Next as their primary browser, hence the separate installation.
* UPDATE: Be sure to read the comment thread at the end of this blog; the discussion got interesting.
It’s been many years since I worked on Direct3D, and over that time the technology has evolved dramatically. Modern GPU hardware has changed tremendously, achieving processing power and capabilities way beyond anything I dreamed of having access to in my lifetime. The evolution of the modern GPU is the result of many fascinating market forces, but the one I know best and find most interesting is the influence that Direct3D had on the new generation of GPU’s, which support thousands of processing cores, have billions more transistors than the host CPU, and are many times faster at most applications. I’ve told a lot of funny stories about how political and contentious the creation of Direct3D was, but here I would like to document some of the history of how the Direct3D architecture came about, an architecture that had a profound influence on modern consumer GPU’s.
Published here with this article is the original documentation for Direct3D from DirectX 2, when it was first introduced in 1995. Contained in this document is an architectural vision for 3D hardware acceleration that was largely responsible for shaping the modern GPU into the incredibly powerful, increasingly ubiquitous consumer general purpose supercomputer we see today.
The reason I got into computer graphics was NOT an interest in gaming; it was an interest in computational simulation of physics. I studied 3D at Siggraph conferences in the late 1980’s because I wanted to understand how to approach simulating quantum mechanics, chemistry and biological systems computationally. Simulating light interactions with materials was all the rage at Siggraph back then, so I learned 3D. Understanding 3D mathematics and the physics of light made me a graphics and color expert, which got me a career in the publishing industry early on creating PostScript RIP’s (Raster Image Processors). I worked with a team of engineers in Cambridge, England creating software solutions for printing screened color graphics before the invention of continuous tone printing. That expertise got me recruited by Microsoft in the early 1990’s to re-design the Windows 95 and Windows NT print architecture to be more competitive with Apple’s superior capabilities at that time. My career came full circle back to 3D when an initiative I started with a few friends to re-design the Windows graphics and media architecture (DirectX) to support real-time gaming and video applications resulted in gaming becoming hugely strategic to Microsoft. Sony had introduced a consumer 3D game console (the Playstation 1), and being responsible for DirectX it was incumbent on us to find a 3D solution for Windows as well.
For me, the challenge in formulating a strategy for consumer 3D gaming for Microsoft was an economic one. What approach to consumer 3D should Microsoft take to create a vibrant, competitive market for consumer 3D hardware that was both affordable to consumers AND future proof? The complexity of realistically simulating 3D graphics in real time was so far beyond our capabilities in that era that there was NO hope of choosing a solution that was anything short of an ugly hack, one that would produce “good enough” 3D for games while being very far removed from the mathematically ideal solutions we had little hope of seeing implemented in the real world during our careers.
Up until that point the only commercial solutions for 3D hardware were for CAD (Computer Aided Design) applications. These solutions worked fine for people who could afford hundred-thousand-dollar workstations. Although the OpenGL API was the only “standard” for 3D API’s that the market had, it had not been designed with video game applications in mind. For example, texture mapping, an essential technique for producing realistic graphics, was not a priority for CAD models, which needed to be functional, not look cool. Rich dynamic lighting was also important to games but not as important to CAD applications. High precision was far more important to CAD applications than to gaming. Most importantly, OpenGL was not designed for highly interactive real-time graphics that used off-screen video page buffering to avoid tearing artifacts during rendering. It was not that the OpenGL API could not be adapted to handle these features for gaming, simply that its actual market implementation on expensive workstations did not suggest any elegant path to $200 consumer gaming cards.
In the early 1990’s computer RAM was very expensive, so early 3D consumer hardware designs optimized for minimal RAM requirements. The Sony Playstation 1 addressed this problem by using a 3D hardware solution that did not rely on a memory-intensive data structure called a Z-buffer; instead it used a polygon-level sorting algorithm that produced ugly intersections between moving joints. This “Painter’s Algorithm” approach to 3D was very fast and required little RAM. It was an ugly but pragmatic approach for gaming that would have been utterly unacceptable for CAD applications.
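To make the tradeoff concrete, here is a minimal sketch, not code from the era, of the per-pixel depth test a Z-buffer enables. It resolves overlapping polygons pixel by pixel with no polygon sorting, at the cost of one stored depth value per pixel, which is exactly the RAM the Playstation’s approach avoided spending:

```cpp
#include <vector>
#include <limits>

struct Framebuffer {
    int width, height;
    std::vector<unsigned int> color;  // one packed RGBA value per pixel
    std::vector<float> depth;         // the Z-buffer: one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(static_cast<size_t>(w) * h, 0),
          depth(static_cast<size_t>(w) * h, std::numeric_limits<float>::max()) {}

    // Plot a fragment only if it is closer than what is already stored at
    // that pixel. Overlapping polygons resolve correctly per pixel, so no
    // polygon sorting (the painter's algorithm) is needed; the cost is the
    // extra per-pixel depth array, i.e. RAM.
    void plot(int x, int y, float z, unsigned int rgba) {
        size_t i = static_cast<size_t>(y) * width + x;
        if (z < depth[i]) {
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
```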
In formulating the architecture for Direct3D we were faced with innumerable similar difficult choices. We wanted the leading Windows graphics vendors of the time (ATI, Cirrus, Trident, S3, Matrox and many others) to be able to compete with one another for rapid innovation in the 3D hardware market without creating utter chaos. The technical solution that Microsoft’s OpenGL team espoused via Michael Abrash was a driver model called 3DDDI (3D Device Driver Interface). 3DDDI was a very simple flat driver model that just supported the hardware acceleration of 3D rasterization. The complex mathematics associated with transforming and lighting a 3D scene were left to the CPU. 3DDDI used “capability bits” to specify additional hardware rendering features (like filtering) that consumer graphics card makers could optionally implement. The problem with 3DDDI was that it invited problems for game developers out of the gate. There were so many cap-bits that every game would either have to support an innumerable number of unspecified hardware feature combinations, to take advantage of every possible way that hardware vendors might choose to design their chips, producing an untestable number of possible hardware configurations and a huge amount of redundant art assets that the games would have to lug around to look good on any given device; OR games would revert to using a simple set of common 3D features supported by everyone, and there would be NO competitive advantage for companies to support new hardware 3D capabilities that did not have instant market penetration. The OpenGL crowd at Microsoft did not see this as a big problem in their world because everyone just bought a $100,000 workstation that supported everything they needed.
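A hypothetical sketch of the cap-bit problem described above (the flag names are illustrative, not the actual DirectX constants): every optional feature has to be probed at runtime, and every combination of missing features tends to need its own fallback path and, often, its own art assets.

```cpp
#include <cstdint>

// Illustrative capability bits (NOT the real DirectX constants).
enum HardwareCaps : uint32_t {
    CAP_BILINEAR_FILTERING = 1u << 0,
    CAP_MIPMAPPING         = 1u << 1,
    CAP_ALPHA_BLENDING     = 1u << 2,
    CAP_FOG_TABLE          = 1u << 3,
    // ...dozens more in practice, one per optional vendor feature
};

void configure_renderer(uint32_t caps) {
    // Each independent bit doubles the number of hardware configurations a
    // game has to test, and often implies a separate set of art assets.
    bool filtering = (caps & CAP_BILINEAR_FILTERING) != 0;
    bool mipmaps   = (caps & CAP_MIPMAPPING) != 0;

    if (filtering && mipmaps) {
        // load high-detail textures with full mip chains
    } else if (filtering) {
        // load mid-detail textures without mip chains
    } else {
        // fall back to point-sampled, low-detail assets
    }
    // ...and so on for every other feature interaction (blending, fog, ...)
}
```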
The realization that we could not get what we needed from the OpenGL team was one of the primary reasons we decided to create a NEW 3D API just for gaming. The issue had nothing to do with the API itself, but with the driver architecture underneath, because we needed to create a competitive market that did not result in chaos. In this respect the Direct3D API was not an alternative to the OpenGL API; it was a driver API designed for the sole economic purpose of creating a competitive market for 3D consumer hardware. In other words, the Direct3D API was not shaped by “technical” requirements so much as economic ones. In this respect the Direct3D API was revolutionary in several interesting ways that had nothing to do with the API itself but rather with the driver architecture it would rely on.
When we decided to acquire a 3D team to build Direct3D, I was chartered with surveying the market for candidate companies with the right expertise to help us build the API we needed. As I have previously recounted, we looked at Epic Games (creators of the Unreal engine), Criterion (later acquired by EA), Argonaut and finally Rendermorphics. We chose Rendermorphics (based in London) because of the large number of quality 3D engineers the company employed and because the founder, Servan Keondjian, had a very clear vision of how consumer 3D drivers should be designed for maximum future compatibility and innovation. The first implementation of the Direct3D API was rudimentary but quickly evolved towards something with much greater future potential.
My principal memory from that period was a meeting in which I, as the resident 3D expert on the DirectX team, was asked to choose a handedness for the Direct3D API. I chose a left-handed coordinate system, in part out of personal preference. I remember it now only because it was an arbitrary choice that caused no end of grief for years afterwards as all other graphics authoring tools adopted the right-handed coordinate system of the OpenGL standard. At the time nobody knew or believed that a CAD tool like Autodesk’s would evolve to become the standard tool for authoring game graphics. Microsoft had acquired Softimage with the intention of displacing Autodesk and Maya anyway. Whoops…
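For readers who have not run into the handedness problem, here is a minimal sketch, my own illustration rather than anything from the API, of what the mismatch costs every asset pipeline: converting right-handed data for a left-handed API means mirroring one axis and reversing triangle winding so that front faces keep facing the camera.

```cpp
// Simple vector and triangle types for the illustration.
struct Vec3     { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Convert a triangle authored in a right-handed tool (the OpenGL-style
// convention most authoring packages settled on) for use with a
// left-handed API such as early Direct3D.
Triangle right_to_left_handed(Triangle t) {
    auto flip = [](Vec3 v) { v.z = -v.z; return v; };
    // Negating Z mirrors the geometry, so the vertex order (winding) must
    // also be reversed to keep front faces facing the camera.
    return Triangle{ flip(t.a), flip(t.c), flip(t.b) };
}
```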
The early Direct3D HAL (Hardware Abstraction Layer) was designed in an interesting way. It was structured vertically into three stages.
DX 2 HAL
The highest, most abstract layer was the transformation layer, the middle layer was dedicated to lighting calculations, and the bottom layer was for rasterization of the finally transformed and lit polygons into depth-sorted pixels. The idea behind this vertical driver structure was to provide a relatively rigid feature path for hardware vendors to innovate along. They could differentiate their products from one another by designing hardware that accelerated increasingly higher layers of the 3D pipeline, resulting in greater performance and realism without incompatibilities, a sprawling matrix of configurations for games to test against, or redundant art assets. Since the Direct3D API created by Rendermorphics provided a “pretty fast” software implementation for any functionality not accelerated by the hardware, game developers could focus on the Direct3D API without worrying about myriad permutations of incompatible 3D hardware capabilities. At least that was the theory. Unfortunately, like the 3DDDI driver specification, Direct3D still included capability bits designed to enable hardware features that were not part of the vertical acceleration path. Although I actively objected to the tendency of Direct3D to accumulate capability bits, the team felt extraordinary competitive pressure from Microsoft’s own OpenGL group and from the hardware vendors to support them.
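Here is a structural sketch of that vertical idea, illustrative only and not the actual HAL interface: three fixed stages, each of which a driver can either accelerate in hardware or leave to the common software fallback, while the game always calls the same path.

```cpp
#include <vector>

struct Vertex    { float x, y, z; };
struct LitVertex { float x, y, z, r, g, b; };

// Each stage is a function pointer the driver may point at a
// hardware-accelerated path; otherwise a common software fallback is used.
struct PipelineStages {
    std::vector<Vertex>    (*transform)(const std::vector<Vertex>&);
    std::vector<LitVertex> (*light)(const std::vector<Vertex>&);
    void                   (*rasterize)(const std::vector<LitVertex>&);
};

// The game always sees the same three-stage path regardless of how many of
// the stages the hardware actually accelerates.
void render(const PipelineStages& hal, const std::vector<Vertex>& mesh) {
    std::vector<Vertex>    transformed = hal.transform(mesh);
    std::vector<LitVertex> lit         = hal.light(transformed);
    hal.rasterize(lit);
}
```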
The hardware companies, seeking a competitive advantage for their own products, would threaten to support and promote OpenGL to game developers because the OpenGL driver model supported capability bits that enabled them to create features for their hardware that nobody else supported. It was common (and still is) for the hardware OEM’s to pay game developers to adopt features of their hardware that were unique to their products but incompatible with the installed base of gaming hardware, forcing consumers to constantly upgrade their graphics cards to play the latest PC games. Game developers alternately hated capability bits because of their complexity and incompatibilities, but wanted to take the marketing dollars from the hardware OEM’s to support “non-standard” 3D features.
Overall I viewed this dynamic as destructive to a healthy PC gaming economy and advocated resisting the trend regardless of what the OpenGL people or the OEM’s wanted. I believed that creating a consistent, stable consumer market for PC games was more important than appeasing the hardware OEM’s. As such I was a strong advocate of the relatively rigid vertical Direct3D pipeline and a proponent of introducing only API features that we expected to become universal over time. I freely confess that this view implied significant constraints on innovation in other areas and placed a high burden of market prescience on the Direct3D team.
The result, in my estimation, was pretty good. The Direct3D fixed function pipeline, as it was known, produced a very rich and growing PC gaming market with many healthy competitors through to DirectX 7.0 and the early 2000’s. The PC gaming market boomed and grew to be the largest gaming market on Earth. It also resulted in a very interesting change in the GPU hardware architecture over time.
Had the Direct3D HAL been a flat driver model with just capability bits for rasterization, as the OpenGL team at Microsoft had advocated, 3D hardware makers would have competed by accelerating just the bottom layer of the 3D rendering pipeline and adding differentiating features to their hardware via capability bits that were incompatible with their competitors. The result of introducing the vertical layered architecture was that 3D hardware vendors were all encouraged to add features to their GPU’s that were more consistent with general purpose CPU architectures, namely very fast floating point operations, in a consistent way. Thus consumer GPU’s evolved over the years to increasingly resemble general purpose CPU’s… with one major difference. Because the 3D fixed function pipeline was rigid, the Direct3D architecture afforded very little opportunity for the frequent code branching that CPU’s are designed to optimize for. GPU’s achieved their amazing performance and parallelism in part by being free to assume that little or no branching code would ever occur inside a Direct3D graphics pipeline. Thus instead of evolving one giant monolithic CPU core with massive numbers of transistors dedicated to efficient branch prediction, as an Intel CPU has, a Direct3D GPU has hundreds to thousands of simple cores that have no branch prediction. They can chew through a calculation at incredible speed, confident in the knowledge that they will not be interrupted by code branching or random memory accesses to slow them down.
Up through DirectX 7.0 the underlying parallelism of the GPU was hidden from the game. As far as the game was concerned some hardware was just faster than other hardware, but the game should not have to worry about how or why. The early DirectX fixed function pipeline architecture had done a brilliant job of enabling dozens of disparate competing hardware vendors to all take different approaches to achieving superior cost and performance in consumer 3D without making a total mess of the PC gaming market for game developers and consumers. It was not pretty and was not entirely executed with flawless precision, but it worked well enough to create an extremely vibrant PC gaming market through to the early 2000’s.
Before I move on to discussing the more modern evolution of Direct3D, I would like to highlight a few other important ideas that influenced the architecture of early modern GPU’s. Recall that in the early to mid 1990’s RAM was relatively expensive, so there was a lot of emphasis on consumer 3D techniques that conserved RAM usage. The Talisman architecture, which I have told many (well-deserved) derogatory stories about, was highly influenced by this observation.
Search this blog for the tags “Talisman” and “OpenGL” for many stories about the internal political battles over these technologies within Microsoft.
Talisman relied on a grab bag of graphics “tricks” to minimize GPU RAM usage that were not very generalized. The Direct3D team, heavily influenced by the Rendermorphics founders, had made a difficult philosophical choice in its approach to creating a mass market for consumer 3D graphics. We had decided to go with a simpler, more general purpose approach to 3D that relied on a very memory-intensive data structure called a Z-buffer to achieve great looking results. Rendermorphics had managed to achieve very good 3D performance in pure software with a software Z-buffer in the Rendermorphics engine, which had given us the confidence to take the bet, go with a simpler, more general purpose 3D API and driver model, and trust that the hardware RAM market and prices would eventually catch up. Note, however, that at the time we were designing Direct3D we did not know about the Microsoft Research group’s “secret” Talisman project, nor did they expect that a small group of evangelists would cook up a new 3D API standard for gaming and launch it before their own wacky initiative could be deployed. In short, one of the big bets that Direct3D made was that the simplicity and elegance of Z-buffers for game development were worth the risk that consumer 3D hardware would struggle to affordably support them early on.
Despite the big bet on Z-buffer support, we were intimately aware of two major limitations of the consumer PC architecture that needed to be addressed. The first was that the PC bus was generally very slow, and the second was that it was much slower to copy data from a graphics card than it was to copy data to a graphics card. What that generally meant was that our API design had to strive to send data to the GPU in the largest, most compact packages possible for processing, and absolutely minimize any need to copy data back from the GPU for further processing on the CPU. In other words, the Direct3D API was optimized to package data up and send it on a one-way trip. This was of course an unfortunate constraint, because there were many brilliant 3D effects that could best be accomplished by mixing the CPU’s efficient branch prediction and robust floating point support with the GPU’s incredible parallel rendering performance.
One of the fascinating consequences of that constraint was that it forced GPU’s to become even more general purpose to compensate for the inability to share data with the CPU efficiently. This was possibly the opposite of what Intel intended to happen with its limited bus performance, because Intel was threatened by the idea that auxiliary cards would offload more processing from their CPU’s, thereby reducing the Intel CPU’s value and central role in PC computing. It was reasonably believed at that time that Intel deliberately dragged their feet on improving PC bus performance to deter a market for alternatives to their CPU’s for consumer media processing applications. Recall from my earlier blogs that the main reason for creating DirectX was to prevent Intel from trying to virtualize all of the Windows media support on the CPU. Had Intel adopted a PC bus architecture that enabled extremely fast access to system RAM shared by auxiliary devices, it is less likely that GPU’s would have evolved the relatively rich set of branching and floating point operations they support today.
To overcome the fairly stringent performance limitations of the PC bus, a great deal of thought was put into techniques for compressing and streamlining DirectX assets being sent to the GPU, to minimize bus bandwidth limitations and the need for round trips from the GPU back to the CPU. The early need for the rigid 3D pipeline had interesting consequences later on when we began to explore streaming 3D assets over the Internet via modems.
We recognized early on that support for compressed texture maps would dramatically improve bus performance and reduce the amount of onboard RAM consumer GPU’s needed. The problem was that no standards existed for 3D texture formats at the time, and knowing how fast image compression technologies were evolving, I was loathe to impose a Microsoft-specified one “prematurely” on the industry. To overcome this problem we came up with the idea of “blind compression formats”. The idea, which I believe was captured in one of the many DirectX patents that we filed, was that a GPU could encode and decode image textures in an unspecified format, but the DirectX API’s would allow the application to read and write from them as though they were always raw bitmaps. The Direct3D driver would encode and decode the image data as necessary under the hood without the application needing to know how it was actually being encoded on the hardware.
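A hypothetical sketch of the “blind compression” idea, with an invented interface rather than the actual DirectX one: the application only ever sees raw RGBA bitmaps, while the storage format behind them is left entirely to the driver.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// The application always reads and writes raw RGBA pixels; how the texture
// is actually stored is the driver's private business.
class BlindTexture {
public:
    BlindTexture(int w, int h) : width_(w), height_(h) {}

    void write_pixels(const std::vector<uint32_t>& rgba) {
        storage_ = driver_encode(rgba);     // opaque, hardware-specific format
    }
    std::vector<uint32_t> read_pixels() const {
        return driver_decode(storage_);     // decoded back to raw RGBA
    }

private:
    // Placeholder codec: a real driver would emit the hardware's native
    // compressed layout here; this software fallback just copies bytes.
    static std::vector<uint8_t> driver_encode(const std::vector<uint32_t>& rgba) {
        std::vector<uint8_t> blob(rgba.size() * sizeof(uint32_t));
        std::memcpy(blob.data(), rgba.data(), blob.size());
        return blob;
    }
    static std::vector<uint32_t> driver_decode(const std::vector<uint8_t>& blob) {
        std::vector<uint32_t> rgba(blob.size() / sizeof(uint32_t));
        std::memcpy(rgba.data(), blob.data(), blob.size());
        return rgba;
    }

    int width_, height_;
    std::vector<uint8_t> storage_;  // format unknown to the application
};
```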
By 1998, 3D chip makers had begun to devise good quality 3D texture compression formats, such that with DirectX 6.0 we were able to license one of them (from S3) for inclusion with Direct3D.
DirectX 6.0 was actually the first version of DirectX that was included in a consumer OS release (Windows 98). Until that time, DirectX was actually just a family of libraries that were shipped by the Windows games that used them. DirectX was not actually a Windows API until five generations after its first release.
DirectX 7.0 was the last generation of DirectX that relied on the fixed function pipeline we had laid out in DirectX 2.0 with the first introduction of the Direct3D API. This was a very interesting transition period for Direct3D for several reasons:
1) The original founders of the DirectX team had all moved on,
2) Microsoft’s internal Talisman project, and its reasons for supporting OpenGL, had passed,
3) Microsoft had brought game industry veterans like Seamus Blackley, Kevin Bachus, Stuart Moulder and others into the company in senior roles,
4) Gaming had become a strategic focus for the company.
DirectX 8.0 marked a fascinating transition for Direct3D because, with the death of Talisman and the loss of strategic interest in OpenGL 3D support, many of the people from those groups came to work on Direct3D. Talisman, OpenGL and game industry veterans all came together to work on Direct3D 8.0. The result was very interesting. Looking back I freely concede that I would not have made the same set of choices that this group made for DirectX 8.0, but it seems to me that everything worked out for the best anyway.
Direct3D 8.0 was influenced in several interesting ways by the market forces of the late 20th century. Microsoft was largely unified against OpenGL and found itself competing with the Khronos Group standards committee to advance Direct3D faster than OpenGL. With the death of SGI, control of the OpenGL standard fell into the hands of the 3D hardware OEM’s, who of course wanted to use the standard to create hardware features that differentiated them from their competitors and to force Microsoft to support the 3D features they wanted to promote. The result was that Direct3D and OpenGL became much more complex and tended to converge during this period. There was a stagnation in 3D feature adoption by game developers from DirectX 8.0 through DirectX 11.0 as a result of these changes. Creating game engines became so complex that the market also consolidated around a few leading engine providers, including Epic’s Unreal Engine and the Quake engine from id Software.
Had I been working on Direct3D at the time, I would have stridently resisted letting the 3D chip OEM’s lead Microsoft around by the nose chasing OpenGL features instead of focusing on enabling game developers and a consistent, quality consumer experience. I would have opposed introducing shader support in favor of trying to keep the Direct3D driver layer as vertically integrated as possible to ensure feature conformity among hardware vendors. I also would have strongly opposed abandoning DirectDraw support as was done in Direct3D 8.0. The 3D guys got out of control and decided that nobody should need pure 2D API’s once developers adopted 3D, failing to recognize that simple 2D API’s enabled a tremendous range of features and an ease of programming that the majority of developers who were not 3D geniuses could easily understand and use. Forcing the market to learn 3D dramatically constrained the set of people with the expertise to adopt it. Microsoft later discovered the error in this decision and re-introduced DirectDraw as the Direct2D API. Basically, the Direct3D 8.0 design geniuses made it brilliant, powerful and useless to average developers.
At the time DirectX 8.0 was being made I was starting my first company, WildTangent Inc., and ceased to be closely involved with what was going on with DirectX features. However, years later I was able to get back to my 3D roots and took the time to learn Direct3D programming in DirectX 11.1. Looking back, it’s interesting to see how the major architectural changes that were made in DirectX 8 resulted in the massively convoluted and nearly incomprehensible Direct3D API we see today. Remember the DirectX 2 pipeline that separated transformation, lighting and rasterization into three basic stages? Here is a diagram of the modern DirectX 11.1 3D pipeline.
DX 11 Pipeline
Yes, it grew to 9 stages, or arguably 13 stages when some of the optional sub-stages, like the compute shader, are included. Speaking as somebody with an extremely lengthy background in very low-level 3D graphics programming, I’m embarrassed to confess that I struggled mightily to learn Direct3D 11.1 programming. The API had become very nearly incomprehensible and unlearnable. I have no idea how somebody without my extensive background in 3D and graphics could ever begin to learn how to program a modern 3D pipeline. As amazingly powerful and featureful as this pipeline is, it is also damn near unusable by any but a handful of the brightest minds in 3D graphics. In the course of catching up on Direct3D, I found myself simultaneously in awe of the astounding power of modern GPU’s and where they are going, and in shocked disgust at the absolute mess the 3D pipeline had become. It was as though the Direct3D API had become a dumping ground for every 3D feature that any OEM had demanded over the years.
Had I not enjoyed the benefit of a decade-long break from Direct3D involvement, I would undoubtedly have a long history of bitter blogs written about what a mess my predecessors had made of a great and elegant vision for consumer 3D graphics. Weirdly, however, leaping forward in time to the present day, I am forced to admit that I’m not sure it was such a bad thing after all. The stagnation of gaming on the PC that resulted from the mess Microsoft and the OEM’s made of the Direct3D API helped produce a successful XBOX. Having a massively fragmented 3D API is not such a problem if game developers have only one hardware configuration to support, as is the case with a game console. Direct3D 8.0, with its early, primitive shader support, was the basis for the first XBOX’s graphics API. Microsoft selected NVIDIA to provide the 3D chip for the first XBOX, giving NVIDIA a huge advantage in the 3D PC chip market. DirectX 9.0, with more advanced shader support, was the basis for the XBOX 360, for which Microsoft selected ATI to provide the 3D chip, this time handing AMD a huge advantage in the PC graphics market. In a sense the OEM’s had screwed themselves. By successfully influencing Microsoft and the OpenGL standards groups to adopt highly convoluted graphics pipelines to support all of their feature sets, they had forced themselves to generalize their GPU architectures, and the 3D chip market consolidated around whatever 3D chip architecture Microsoft selected for its consoles.
The net result was that the retail PC game market largely died. It was simply too costly, too insecure and too unstable a platform for publishing high production value games on any longer, with the partial exception of MMOG’s. Microsoft and the OEM’s had conspired together to kill the proverbial golden goose. No biggie for Microsoft as they were happy to gain complete control of the former PC gaming business by virtue of controlling the XBOX.
From the standpoint of the early DirectX vision, I would have said that this outcome was a foolish, shortsighted disaster. Had Microsoft maintained a little discipline and strategic focus on the Direct3D API, they could have ensured that there were NO other consoles in existence within a single generation by using the XBOX to strengthen the PC gaming market rather than inadvertently destroying it. While Microsoft congratulates itself for the first successful U.S. console launch, I would count all the gaming dollars collected by Sony, Nintendo and mobile gaming platforms over the years that might have remained on Microsoft-controlled platforms had Microsoft maintained a cohesive strategy across media platforms. I say all of this from a past-tense perspective because, today, I’m not so sure that I’m really all that unhappy with the result.
The new generation of consoles from Sony AND Microsoft have reverted to a PC architecture! The next-generation GPU’s are massively parallel, general-purpose processors with intimate access to memory shared with the CPU. In fact, the GPU architecture became so generalized that a new pipeline stage called DirectCompute was added in DirectX 11 that simply allows the CPU to bypass the entire convoluted Direct3D graphics pipeline in favor of programming the GPU directly. With the introduction of DirectCompute, the promise of simple 3D programming returned in an unexpected form. Modern GPU’s have become so powerful and flexible that the possibility of writing cross-platform 3D engines directly for the GPU, without making any use of the traditional 3D pipeline, is an increasingly practical and appealing programming option. From my perspective here in the present day, I would anticipate that within a few short generations the need for the traditional Direct3D and OpenGL API’s will vanish in favor of new game engines with much richer and more diverse feature sets that are written entirely in device-independent shader languages like Nvidia’s CUDA and Microsoft’s AMP API’s.
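To give a flavor of what programming the GPU directly looks like, here is a minimal sketch using Microsoft’s C++ AMP, one of the options named above; the function and the task are purely illustrative. It scales an array on the GPU without touching any part of the Direct3D graphics pipeline.

```cpp
// Builds with Visual C++ (which supplies C++ AMP); no Direct3D device,
// swap chain or pipeline state is touched.
#include <amp.h>
#include <vector>

void scale_on_gpu(std::vector<float>& data, float factor) {
    using namespace concurrency;

    // Wrap the host data so the runtime can move it to the GPU.
    array_view<float, 1> view(static_cast<int>(data.size()), data);

    // One GPU thread per element; the lambda body runs on the GPU.
    parallel_for_each(view.extent, [=](index<1> idx) restrict(amp) {
        view[idx] *= factor;
    });

    view.synchronize();  // copy results back to the host vector
}
```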
Today, as a 3D physics engine developer, I have never been so excited about GPU programming because of the sheer power and relative ease of programming directly to the modern GPU without needing to master the enormously convoluted 3D pipelines associated with the Direct3D and OpenGL API’s. If I were responsible for Direct3D strategy today, I would be advocating dumping the investment in the traditional 3D pipeline in favor of rapidly opening direct access to a rich GPU programming environment. I personally never imagined that my early work on Direct3D would, within a couple of decades, contribute to the evolution of a new kind of ubiquitous processor that enables the kind of incredibly realistic and general modeling of light and physics that I learned about in the 1980’s but never believed I would see computers powerful enough to model in real time during my active career.
Software AG today announced it was positioned by Gartner, Inc., a leading industry analyst firm, in the Leaders Quadrant of the recently published Magic Quadrant for On-Premises Application Integration Suites. In gaining this recognition, vendors were evaluated based on completeness of vision and ability to execute. The quadrant evaluates the application integration and SOA project market, which are strategic for Software AG as a vendor of application infrastructure middleware.
“We believe Gartner naming us as a leader with the furthest position on both axes in the Magic Quadrant for On-Premises Application Integration Suites* is a validation of our product innovation, high quality services and strong go to market model,” said Dr. Wolfram Jost, Software AG’s Chief Technology Officer. “Our goal is to continue to deliver the most comprehensive, innovative infrastructure middleware offerings that improve business outcomes of our customers, while enabling them to achieve better agility and drive growth.”
Gartner’s evaluation of Software AG is primarily based on its flagship offering webMethods Suite V9.0. It includes tightly integrated products such as webMethods Integration Server as an Enterprise Service Bus (ESB), Terracotta Universal Messaging for fast asynchronous messaging, webMethods Trading Networks for B2B integration, webMethods BPMS for process orchestrations and monitoring and CentraSite for metadata lifecycle management.
The nexus of four forces – cloud, mobile, social, and big data – is presenting unprecedented new opportunities to innovate and grow the business. With webMethods Suite, organizations can take full advantage of these opportunities by establishing a strong but flexible integration backbone to build new applications. It allows organizations to leverage existing IT investments while managing the proliferation of data, devices, and services resulting from the four forces.
Unlike other solutions in the market, the webMethods Suite is an open, cross platform solution. It delivers capabilities as building blocks that fit together allowing customer implementations to grow as their needs grow. It is also easy to use across all lifecycle stages from design to production, lowering total cost of ownership. Strong lifecycle governance baked into the platform helps companies maximize reuse and align closely with business needs.
Complimentary copies of Gartner’s report are available at www.softwareag.com/recognition.
* Gartner Magic Quadrant for On-Premises Application Integration Suites by Jess Thompson, Yefim V. Natis, Massimo Pezzini, Daniel Sholler, Ross Altman, Kimihiko Iijima, published 27 June 2013.
About the Magic Quadrant
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
About Software AG
Software AG (SOW.F) helps organizations achieve their business objectives faster. The company’s big data, integration and business process technologies enable customers to drive operational efficiency, modernize their systems and optimize processes for smarter decisions and better service. Building on over 40 years of customer-centric innovation, the company is ranked as a leader in 15 market categories, fueled by core product families Adabas and Natural, ARIS, Terracotta and webMethods. Software AG has ca. 5,300 employees in 70 countries and had revenues of €1.05 billion in 2012.
PC Health Boost is registry cleaning software that PC users can use to detect and repair serious computer problems such as screen freezes, slow boot times, and DLL errors. Problems that occur in the main files of the Windows operating system can prevent PC Health Boost from performing a complete service. Uncovering these problems and fixing them is not complicated, but the process is not well known among many PC users. Boost Software recognized this issue and created an instructional video to help users overcome these serious problems.
The new video explains how Windows installer errors can prevent PC Health Boost from performing a complete service. The video then goes through detailed steps of possible solutions to try to fix the errors. Since Windows installer issues can vary in origin, the video first offers basic solutions and then continues with those that require more steps to complete.
Customers who are unable to fix their Windows Installer errors with the proposed solutions or who encounter other difficulties are directed to contact Boost Software technical support via email or phone for additional assistance.
More information about the features and benefits of PC Health Boost is available on the Boost Software website located at http://www.boostsoftware.com
About Boost Software
Amit Mehta and Peter Dunbar are software engineers and affiliate gurus who founded Boost Software in 2012 to help people with PC performance problems. The company currently offers a suite of PC performance products that includes PC HealthBoost, DriverBoost Pro, and Startup Boost.
We are always looking for advertising and partnership opportunities and anyone wanting to meet with us, or inquire about writing a guest post for the blog may reach us using the contact link provided.
JAKARTA – The Toshiba Satellite C40D could be worth considering for those looking for a laptop on a budget of less than Rp 5 million. A budget that size does place the laptop squarely in the entry-level segment. But don’t worry: the Toshiba name is a guarantee that the product is far from shoddy.
The specification list provided by Toshiba makes clear that the Satellite C40D is not a laptop with underwhelming features or poor performance. Instead, Toshiba has crammed the new product, released in early July, with the latest features and technology.
Toshiba gave Tribun the opportunity to examine the device, which it touts as a laptop for the Indonesian market adopting the AMD quad-core A4 Accelerated Processing Unit (APU) with an integrated AMD Radeon graphics card.
Toshiba also provides convenient data storage on the laptop’s hard drive. With a capacity of up to 500GB, the drive is also designed to absorb shocks and impacts so that data is not lost.
Take performance, for example. The Toshiba Satellite C40 runs the Windows 8 Single Language operating system, which of course will not run comfortably without capable hardware behind it. The quad-core AMD A4 APU, integrated with an AMD Radeon graphics card and backed by 2 GB of RAM, ensures the Satellite C40 can run software and multimedia applications smoothly.
That internal hardware is enough to make the Satellite C40 a powerful, graphics-rich laptop that offers a different experience from other products in the market. The Satellite C40 is an engaging companion for all kinds of computing work.
NASHUA, N.H., July 11, 2013 /PRNewswire/ — Software start-up SnoopWall announced today that the company has secured a round of funding from the Angel Breakfast Club, one of the oldest investment groups in the country. SnoopWall recently developed an antispyware program that blocks remote eavesdropping. The unique patent-pending technology will be available on laptops, smartphones, and tablets.
“We’re pleased to achieve our first major milestone in the company—acquiring the funds and strategic support needed for SnoopWall to launch,” said Gary Miliefsky, President and Founder of SnoopWall. “It’s an honor to be funded by this prestigious and well recognized angel investment group.”
The Angel Breakfast Club was started in 1976 by the late Mort Goulder. Over the past 30 years, the group has invested in more than 100 companies. The average return-on-investment rate is 29%, a near record for the industry.
Allan Cowen, a leading angel investor and advisor to the company said, “SnoopWall represents another investment opportunity that clearly positions a patent-pending technology that addresses today’s media narrative on mobile security and personal privacy protection. Backing the SnoopWall project early on came with no hesitation given the market need, and perhaps more importantly, knowing the members of the team and those that have advisory roles.”
SnoopWall is offering a free trial version of their program for Android until August 1. Visit http://www.snoopwall.com/free-version to sign up for a copy.
SnoopWall is the world’s first counterveillance software company focused on helping consumers and enterprises protect their privacy on all of their computing devices including smartphones, tablets, and laptops.
The Carnegie Mellon University Software Engineering Institute (SEI) has announced the slate of software engineering thought-leaders who will serve as keynote speakers for the Team Software Process (TSP) Symposium 2013. Held in Dallas, Texas, on September 16-19, the TSP Symposium 2013 keynote line-up includes Bill Curtis, senior vice president and chief scientist with Cast Software; Enrique Ibarra, senior vice president of technology of the Mexican Stock Exchange (BMV); and Robert Behler, chief operating officer of the SEI.
The symposium theme, When Software Really Matters, explores the idea that when product quality is critical, high-quality practices are the best way to achieve it.
“When a software system absolutely must work correctly, quality must be built in from the start. A disciplined approach to quality also offers the benefit of lower lifecycle costs. The TSP promotes the application of practices that lead to superior, high-quality products,” said James McHale, TSP Symposium 2013 technical chair. “Our keynote speakers and representatives from industry and government organizations from around the world will share how using TSP helps organizations build quality in from the start when there’s no room for error.”
In addition to the keynote speakers, substantial technical program, and organized networking events, the TSP Symposium 2013 also offers practitioners an in-depth learning opportunity with full-day tutorials on introductory and advanced TSP concepts.
“I am very excited about this year’s lineup of keynote speakers and technical presenters. The symposium should be stimulating with presentations on a broad array of topics related to quality-focused software development. It is also an excellent way for participants to network and exchange diverse ideas about how they have used the PSP/TSP approach to achieve their software quality goals,” said Mark Kasunic, Symposium co-chair.
I’ve been listening to the audiobook of Heart of Darkness this week, read by Kenneth Branagh. It’s fantastic. It also reminds me of some jobs I’ve had in the past.
There’s a great passage in which Marlow requires rivets to repair a ship, but finds that none are available. This, in spite of the fact that the camp he left further upriver is drowning in them. That felt familiar. There’s also a famous passage involving a French warship that’s blindly firing its cannons into the jungles of Africa in hopes of hitting a native camp situated within. I’ve had that job as well. Hopefully I can help you avoid getting yourself into those situations.
There are several really good lists of common traits seen in well-functioning engineering organizations. Most recently, there’s Pamela Fox’s list of What to look for in a software engineering culture. More famous, but somewhat dated at this point, is Joel Spolsky’s Joel Test. I want to talk about signs of teams that you should avoid.
This list is partially inspired by Ralph Peters’ Spotting the Losers: Seven Signs of Non-Competitive States. Of course, such a list is useless if you can’t apply it at the crucial point, when you’re interviewing. I’ve tried to include questions to ask and clues to look for that reveal dysfunction that is deeply baked into an engineering culture.
Preference for process over tools. As engineering teams grow, there are many approaches to coordinating people’s work. Most of them are some combination of process and tools. Git is a tool that enables multiple people to work on the same code base efficiently (most of the time). A team may also design a process around Git: avoiding the use of remote branches, only pushing code that’s ready to deploy to the master branch, or requiring people to use local branches for all of their development. Healthy teams generally try to address their scaling problems with tools, not additional process. Processes are hard to turn into habits, hard to teach to new team members, and often evolve too slowly to keep pace with changing circumstances. Ask your interviewers what their release cycle is like. Ask them how many standing meetings they attend. Look at the company’s job listings: are they hiring a scrum master?
Excessive deference to the leader or worse, founder. Does the group rely on one person to make all of the decisions? Are people afraid to change code the founder wrote? Has the company seen a lot of turnover among the engineering leader’s direct reports? Ask your interviewers how often the company’s coding conventions change. Ask them how much code in the code base has never been rewritten. Ask them what the process is for proposing a change to the technology stack. I have a friend who worked at a growing company where nobody was allowed to introduce coding conventions or libraries that the founding VP of Engineering didn’t understand, even though he hardly wrote any code any more.
Unwillingness to confront technical debt. Do you want to walk into a situation where the team struggles to make progress because they’re coding around all of the hacks they haven’t had time to address? Worse, does the team see you as the person who’s going to clean up all of the messes they’ve been leaving behind? You need to find out whether the team cares about building a sustainable code base. Ask the team how they manage their backlog of bugs. Ask them to tell you about something they’d love to automate if they had time. Is it something that any sensible person would have automated years ago? That’s a bad sign.
Not invented this week syndrome. We talk a lot about “not invented here” syndrome and how it affects the competitiveness of companies. I also worry about companies that lurch from one new technology to the next. Teams should make deliberate decisions about their stack, with an eye on the long term. More importantly, any such decisions should be made in a collaborative fashion, with both developer productivity and operability in mind. Finding out about this is easy. Everybody loves to talk about the latest thing they’re working with.
Disinterest in sustaining a Just Culture. What’s Just Culture? This post by my colleague John Allspaw on blameless post mortems describes it pretty well. Maybe you want to work at a company where people get fired on the spot for screwing up, or yelled at when things go wrong, but I don’t. How do you find out whether a company is like that? Ask about recent outages and gauge whether the person you ask is willing to talk about them openly. Do the people you talk to seem ashamed of their mistakes?
Monoculture. Diversity counts. Gender diversity is really important, but it’s not the only kind of diversity that matters. There’s ethnic diversity, there’s age diversity, and there’s simply the matter of people acting differently, or dressing differently. How homogenous is the group you’ve met? Do they all remind you of you? That’s almost certainly a serious danger sign. You may think it sounds like fun to work with a group of people who you’d happily have as roommates, but monocultures do a great job of masking other types of dysfunction.
Lack of a service-oriented mindset. The biggest professional mistakes I ever made were the result of failing to see that my job was ultimately to serve other people. I was obsessed with building what I thought was great software, and failed to see that what I should have been doing was paying attention to what other people needed from me in order to succeed in their jobs. You can almost never fail when you look for opportunities to be of service and avail yourself of them. Be on the lookout for companies where people get ahead by looking out for themselves. Don’t take those jobs.
There are a lot of ways that a team’s culture can be screwed up, but those are my top seven.
A new Android computer has just been rolled into the market. The computer, named the JW-11, comes with a single-core ARM Cortex-A9 Amlogic AML8726-M3 processor. It also runs Android 4.0 or 4.1.
The JW-11 is equipped with 1GB of RAM and built-in internal storage. In addition, the computer has a 2.5-inch drive bay that allows users to add a large amount of extra storage capacity.
Although marketed with the Android OS, the computer can also easily run Linux. According to reports, several users have already installed a Linux OS on the JW-11.
Other features available on this computer include an HDMI port, built-in WiFi and USB 2.0 ports. There is also an SDHC card reader. As for price, the JW-11 is priced at 68 USD.
Having remote conversations over an internet connection is of course much more fun when it goes beyond voice to include video chat. Add the ability to send text messages or image files directly, and the application becomes even more complete. This is what Camfrog Video Chat tries to offer.
Camfrog Video Chat is the right application for chatting in all of the ways described above. Not only can it be used with close friends and relatives, Camfrog also provides Rooms, gathering places for Camfrog users from around the world. In addition to Windows and Mac OS, the application is also available for the Android platform. On Android its use is even simpler thanks to the built-in camera, unlike on a desktop PC where you first have to provide a webcam. Of course, the video call function can only be used if your smartphone or tablet has a front camera.
You can use Camfrog Video Chat for free. However, some additional features only become active in the paid Pro version. These include video chat with more than one contact simultaneously, full-screen video chat for a bigger, clearer picture, file transfers, text messaging, and fun effects.