hash_bucket()


I’ve been trying all the tricks (and old passwords I can think of) to gain access to my original gmail account, but nothing seems to work. If you know of any method (even if there’s a cost) please share. Would love to recover some old memories and files…

I have no access to the recovery email.


A no-go?

Based on the It’s Not Working On My Machine principle, I conclude that Google Chrome does not support XBAPs. Is this confirmed?

I’m sitting here reading through the comic book (that would be the marketing brochure) from Google outlining their new web browser project, called Google Chrome. Most of the news sites are already abuzz, praising this yet-to-be-released product and calling it “an innovative new open source web browser” that, according to arstechnica.com, will offer ‘extraordinary’, ‘unprecedented’ and ‘revolutionary’ new features.

It could also be seen as another step in Google’s ongoing attempt to monopolize the web, but that would be mixing technology and politics. Let’s do just that :).

As blogoscoped.com writes:

Google is playing this as nicely as possible by open-sourcing things, with perhaps part of the reason to try to defend against monopoly accusations – after all, Google already owns a lot of what’s happening inside the browser, and some may feel owning a browser too could be a little too much power for a single company (Google could, for instance, release browser features that benefit their sites more than most other sites… as can Microsoft with Internet Explorer).

Sure, making it open source gives any technically adept individual the possibility to take it apart and modify it, but the fact that this opportunity will be used by only a very small fraction of all the people browsing the web makes it a poor excuse for surrendering both the browsing experience and the content to the same company.

In fact this goes even further with the phishing/malware protection features announced in the same comic (p. 33). According to the brochure:

There’s a second list of malware websites. Websites where a ton of bad things might happen to your computer, just on arrival.

When we discover malicious content, we notify the owner of a website, who usually wasn’t intending to be malicious, and they can take this information and clean up their site.

In effect, Google will thus be policing the web on our behalf, notifying owners of websites hosting malicious content so that they can clean up their sites. From a practical perspective this might seem like a nice idea; after all, Google is already watching just about every step you take on-line and all the content you might be accessing, and most likely even telling you where to go based on your Google searches. But from a political/philosophical perspective, the idea of Google playing Big Brother on-line is not so appealing.

Blogoscoped.com is right: Google already owns most of what’s happening inside the browser, including your email on Gmail and any file you created with Google Apps and saved in Google’s on-line storage. Owning your means of accessing the web, as well as policing its content, would seem to spell a little too much power for one, any, single commercial entity…

As for ‘unprecedented’ and ‘revolutionary’, most of the usability features of Chrome have already been around (in some form or other) for some time. Mozilla has the Prism project, the tab page is already a part of Opera, and the Incognito privacy mode has been around in Safari for years and will be part of IE8. As for the OmniBox (that would be the extended address bar) and its auto-completion features, Firefox 3 is already doing some very nice and innovative work here as well…

On the other hand, the idea of handling each tab as a separate process is a very neat architectural idea that I am sure will have a great impact on performance and stability, perhaps even on security, in a good sense that is. Also, I for one totally disagree with the people claiming that it would be best to have only one single browser, for the sake of standards and compatibility. There are many ways of browsing the web, and equally many ways of digesting and organizing online information. More browsers mean more options to choose from in terms of usability, UX and features.
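To make the point concrete, here is a toy POSIX sketch of my own (not Chrome’s actual implementation, and the per-tab “work” is made up): fork one process per tab, let one of them crash, and watch the parent and the remaining tabs carry on.

/* tabs.c - toy illustration of per-tab process isolation.
 * NOT Chrome's architecture; just shows that a crash in one
 * forked child does not bring down its siblings or the parent. */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_TABS 3

static void run_tab(int id)
{
    if (id == 1)
        raise(SIGSEGV);   /* simulate a renderer bug: crash this "tab" only */
    printf("tab %d rendered fine\n", id);
    exit(EXIT_SUCCESS);
}

int main(void)
{
    pid_t pids[NUM_TABS];

    for (int i = 0; i < NUM_TABS; i++) {
        pids[i] = fork();
        if (pids[i] == 0)              /* child process becomes one "tab" */
            run_tab(i);
        else if (pids[i] < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
    }

    /* the "browser" survives and can report what happened to each tab */
    for (int i = 0; i < NUM_TABS; i++) {
        int status;
        waitpid(pids[i], &status, 0);
        if (WIFSIGNALED(status))
            printf("tab %d crashed (signal %d), others unaffected\n",
                   i, WTERMSIG(status));
    }
    return EXIT_SUCCESS;
}

The same boundary is what makes sandboxing possible: a misbehaving page can at worst kill its own process, and the OS rather than the application becomes the crash boundary.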

In short, I’m not saying I hate Google or that bringing another browser to the market is such a bad thing. However, I do think it is important to remember that Google is Just Another Profit Oriented Company that will do what it takes to increase market share and quell its competitors.

Clever slogans like Don’t Be Evil and punch lines like ‘without competition we have stagnation’ (p. 37) will not change this, though they might dupe some journalists and geeks into thinking the company really is just a Big Friendly Giant…

All in all, it’s just business as usual…

Of course you saw it coming! At least if any of your relatives was named Nostradamus…

APPLE + STARBUCKS

From a marketing perspective, I’m sure just the mention of these two brands in the same breath must make your head spin with all the life-style keywords you can slam onto the ads. The only problem might be that the combination is so hip and right-on-time that it is hard to find a medium where such an ad could fit comfortably. TV is obviously out of the question; Web 2.0 might be OK, but you’d still have to deal with that old 2000-like internet thingy…

If it works out, great! I will happily admit that I’m a fan of both the iPod and Starbucks. But this ‘mash-up’ seems a little too slippery… Just me being skeptical?

MIX 2007

Posted on: April 20, 2007

Maybe most of you already know, but I’ll be visiting the Mix07 Conference in Las Vegas, USA (April 30th to May 2nd). Mix is a conference that aims to bring together web developers, designers and IT decision makers for a 72h discussion on the future of the web.

While MIX is organized by Microsoft Corp., they are really trying hard to get a good mixture of people, both those who live in the MS world and those who do not.

At last year’s conference I was really surprised at how many non-MS speakers there were, and that those speakers also seemed comfortable enough to deliver a very sober and unbiased view of MS and their technologies. Many of the new technologies and platforms that Microsoft has been unleashing on the world lately offer new paradigms for software development and deployment. Thus critique, as well as unbiased appreciation, is very important, I think.

Last year was a lot of fun and I am looking forward to this year’s event as well.

I for one find it increasingly difficult to remember all the interesting URLs that I stumble upon when meandering the web. I am also very bad at actually making use of bookmarks, as I often find that the name of a bookmark fails to give an adequate description of where it points. Instead, I am more and more depending on Google to find me the page I was looking for, by searching on the same keywords that made the site stick in my memory in the first place…

Also, with the rapidly decreasing supply of available domain names, a lot of smaller companies or start-ups find it difficult to register a logical domain that reflects the name of their business. Given these two “facts”, the only logical thing to do of course is to register a name that is loosely related to your company, and then inform your audience of what search terms they should use to locate you on Google.

I’m not sure about the rest of the world, but here in Tokyo it has become increasingly common to see advertisements on trains and billboards that simply feature a text-box with a keyword and a button next to it that says “Search”. Some of my favorites include the words “1600yen” and “Eat Fish Now”. I also learnt recently that the car maker Pontiac had done a similar marketing stunt by simply using a screen-shot of Google with their latest car model name in the search box. (Google apparently received no royalty in this case.)

I guess the point is: who needs DNS or even a domain name, when we can just use Google? Or in other words, is Google the new “DNS”?

Please observe that this post is entirely speculative and is in no way based on any methodical research. It is merely posted as an open question.

When thinking about the vast “clusters of in-disposable information” floating around the universe, as the Swedish astronomer and philosopher Peter Nilsson put it, in relation to what is known as the Web, it is sometimes an encouraging thought that with just a few clicks and a well-formulated search string, thousands of corporate websites, myriads of forums and communities, and countless blogs and personal websites are now within every connected person’s reach.

Though maybe not exactly what Engelbart, Bush, Nelson, Kay and the others had in mind, it sure does seem to realize at least some of the basic ideas of the Hyperlinked Knowledge-space, the Augmentation of the human intellect and the Ever ubiquitous computer.

All is well and good under the heavens, and still I for one have grown increasingly lazy in my meandering of the www. Sure enough, I use Google every day to look up details, refresh my memory on something long forgotten or simply to cross-reference, but I also find, maybe even more so lately, that the actual sources I turn to most frequently have been reduced in number to maybe 8-10 websites. There are a number of reasons for this. One is that my areas of interest are maybe a bit narrow nowadays, mostly related to work. Thus any web search I perform tends to turn up the same 10-15 sites, of which I have already dismissed some as no good, and some that require me to pay money or subscribe to piles of spam, leaving me of course with a small subset of 5-6 pages that I keep returning to as my main sources of information.

So why is it that in such a vast ocean of resources, I keep to my own little pond of frogs and no longer allow myself to drift with the waves? Well, one thing I have found very common, at least when it comes to problem solving, and specifically in programming, is that as a few sites with large user communities score high in the search engines and thus grow even more popular, most of the information I find on minor sites and blogs around the web turns out to originate within those few big sites. This is perhaps best illustrated with a fake example. (There are real ones as well, but forgive me for being lazy and not taking the time to look one up right now.)

Say developer A is trying to solve a problem that involves reading non-uniform-length CSV files with floating point numbers into an array in a bare-bones C program. A is an intermediate C programmer and could probably solve this task in 2-3 hours at most, but for lack of time and inspiration he turns to the web to see if someone has a small piece of code that he can copy and use as it is. After searching the web for a few minutes he stumbles upon Forum X, where C developers discuss C programming problems and solutions. Here A posts a question asking for a small function that would perform his task, then signs off and turns his attention to other parts of his program for a couple of hours. Checking back at Forum X later in the afternoon reveals a helpful reply from developer B, who happens to sit on just such a function, which he himself found on a blog, copied and used in his own program. A thanks B, copies the code and adds it to his own source code. Sometime later, developer C, faced with the same problem, finds the same Forum X, sees no need to look any further, copies the code and, out of good manners, pastes the very same to his weblog for others to enjoy.
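For concreteness, the snippet making the rounds might look something like the following (my own sketch of the hypothetical function, with the minimal error handling typical of forum code):

/* csv_floats.c - read comma-separated doubles from lines of varying
 * length into one dynamically grown array. Forum-snippet quality. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reads all numbers from fp, returns a malloc'ed array and sets *count.
 * Caller frees the result. Returns NULL on allocation failure
 * (or on empty input). */
double *read_csv_floats(FILE *fp, size_t *count)
{
    double *vals = NULL;
    size_t n = 0, cap = 0;
    char line[1024];

    while (fgets(line, sizeof line, fp) != NULL) {
        for (char *tok = strtok(line, ",\n"); tok; tok = strtok(NULL, ",\n")) {
            if (n == cap) {                 /* grow the array as needed */
                cap = cap ? cap * 2 : 16;
                double *tmp = realloc(vals, cap * sizeof *vals);
                if (!tmp) { free(vals); return NULL; }
                vals = tmp;
            }
            vals[n++] = strtod(tok, NULL);  /* silently yields 0.0 on junk */
        }
    }
    *count = n;
    return vals;
}

int main(void)
{
    size_t n;
    double *vals = read_csv_floats(stdin, &n);
    for (size_t i = 0; i < n; i++)
        printf("%g\n", vals[i]);
    free(vals);
    return 0;
}

Note the classic forum-snippet flaws: strtod’s errors are silently ignored, and lines longer than 1024 bytes get split mid-number. Flaws like these travel along every time the snippet is copied, which is rather the point of this post.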

As this pattern repeats over and over again, the same little solution keeps getting copied, republished and reused. In a good scenario, the very first source at least becomes the starting point of endless variations on the same theme, perhaps not rendering very novel solutions but at least serving as study material for some. In a bad scenario, the solution simply gets copied and pasted again and again, requiring no understanding of its inner logic or workings on the part of the user. My favourite part is when a web search for a common problem leads to the exact same source code turning up at dozens of locations, all attributed to different people…

While I am in no way against the posting, reuse and redistribution of information or solutions, I would like to pose the question: is there no inherent danger at all in this scenario? Is there no risk of novel solutions stagnating? What if the original solution has a bug or a flaw in its internals? The above example may be overly simplified for the sake of argument, but is this not a pattern that can also be seen in, for example, undergraduate scientific reports? Visual designs?

Of course, the web also features a sometimes very strict and efficient “peer-review system” leading to the elimination of unreliable sources, the correction of errors in proposed solutions or the rewarding of credit where credit is due, but this peer reviewing is in no way 100% reliable, especially since the number of available sources for any popular piece of information tends to grow exponentially on the web…

To sum it up: given that so many solutions (at least in the world of programming) get copied and republished in endless repetition, is there no risk of innovation in problem solving stagnating, or of aspiring developers never learning to perform the very basic steps of programming because they keep copying and pasting the same code over and over again?

I for one would just like to point out that forums, for example, are not code repositories, reading blogs cannot replace reading the documentation, and if you want to become a good programmer you should always try to solve your problems and tasks yourself before asking for solutions on-line (at least that way you will know how to pose a good question…).

Also, yes, do use forums and communities a lot! They are a great place for getting feedback on your designs and ideas, looking at how others solved similar issues, making friends and asking questions once you actually do get stuck. But again, they are not code repositories…




