Wednesday, May 25, 2011
Some time ago I bought two SG560 firewalls from SecureComputing/SnapGear. These two inexpensive boxes could do almost everything: VPN, NAT, routing, DNS proxy, multiple IP addresses, etc. But alas - SecureComputing was acquired by their competitor McAfee, and after one firmware upgrade they announced that the products would be discontinued.
Now, I still had two boxes - but one of them went bad and started dropping packets and freezing every now and then. The other box had been traded away for a good coffee machine a few years back, so I had to invest in a new firewall for the office.
Everyone at the office was getting more and more used to having a VPN connection, and the semi-defective SG560 had over time made it very clear that stability was important. For this reason the choice was rather easy - I would do something I never do... I would go for the "cover my ass" solution. If you buy something from Cisco, no one can blame you if the damn thing doesn't work. I decided that the ASA5500 was a good choice.
The price was around 3500 DKK - it was the model limited to 10 VPN users, which was sufficient for now. And if the business should grow beyond that, we could just give Cisco an additional 3000 DKK and the box would suddenly be able to handle 10 more VPN users. Coming from an open source world this license lock-down was a strange encounter - but... I had a "cover my ass" decision to focus on, so I disregarded it for now.
The box arrived and it did not look like much - but it carried a nice logo representing the cover of my ass: "Cisco". Now everything would work out, and everyone in the office would nod politely - indicating that they knew I had "saved their day".
Now, I have set up more than 20 different routers, firewalls, etc. to give local networks access to the Internet, with redundant connections, VPNs, port forwarding, routing, and so on. So when I saw the box I thought "no problem - this was a very expensive unit, it must be even better than anything I have ever seen".
But alas, this was not the case. Rather than getting a great firewall with all the configurations we needed, VPN included, I found the missing piece of a puzzle I had not been able to solve for a very long time: why the hell have I seen so many Cisco certifications in potential employees' CVs?
Now, having been able to configure almost every router and firewall I have ever encountered, I simply could not figure out why a certification from Cisco would ever be necessary. But struggling with this ASA5500 it all became clear. Here are my findings:
1) Instead of an HTML-based interface for configuring the ASA5500, it has a Java Web Start configuration client called ASDM. This in itself is not a bad thing, but it does seem like overkill considering that this box only does networking-related things. It also takes away the option of using browser tabs to keep multiple configuration pages open at the same time, which often proves valuable when one part of the configuration depends on another.
2) Whenever I wanted to do something simple (like adding a port forwarding rule) I was met with a dialog where nothing was named the way I expected. First of all, they have some strange concept of everything being reversed. Who would have thought that the "source" and "destination" of a NAT rule would be swapped? It took me well over an hour before the office could access the Internet again.
3) We also have a guest network. This is normally very simple: you create another LAN on some of the switch ports and set up some security parameters. No, no, no - it does not work this way with Cisco. First of all, freeing up the switch ports had to be done somewhere else entirely. And when I finally got them released, I found out that even though 10 VPN users was enough, the damn thing came with 8 switch ports but could only create one WAN, one DMZ, and one LAN. Actually, I was able to create another LAN, but then it was not allowed any traffic in or out, which would make our guest network quite a joke.
4) The box is capable of three kinds of VPN: IPsec, clientless SSL, and Easy VPN. The only one that would satisfy our needs was IPsec, and even though there was a nice wizard, it was not possible to get a working IPsec set up. After spending many hours spanning multiple days, I gave up. I managed to get the connection established between the client and the ASA5500 but could not grasp how the security was supposed to be configured so that I could get access to the LAN. Now, I understand I have my limitations, so maybe it was just me. So I appointed our systems administrator, who has quite a lot of experience setting up all kinds of strange and exotic network structures, and teamed him up with our technical supporter. Those two wasted a few hours struggling with all three kinds of VPN and were not even able to get a working connection to the box.
5) So why did we not just read the documentation? Let me tell you why. Almost all of the dialogs and wizards in this ASDM software have a help button. When I first pressed one of those I thought "excellent, now I just need to read a little and I will understand" - but here is how the documentation is structured. First: the pages correspond one-to-one to the configuration structure, so if I can't find a particular setting in ASDM, I cannot find it in the documentation either. Second: each page of the documentation explains, from top to bottom, the dialog it describes, in this form: "Put the source IP in the source IP input field", "Put the source service in the source service input field", and so on. WTF! Now I can never write RTFM to anyone again.
6) OK, I did manage to get Internet access working, so I could just search and find! But no - no one so much as mentions this ASDM application anywhere. All the help I could find consisted of commands to fire at the damn thing from a command line. Now, I don't mind that at all, but almost all of these examples did not work because they were for the ASAXXXX where XXXX != 5500. And when I finally found some matching examples, they were deprecated because the firmware had been updated.
7) And then, in my quiet evening time writing this, I found my CPU at 100% load... from writing on my blog? No - ASDM had found a way to turn on the fan in my laptop. It was resting quietly on a configuration page, yet using 100% CPU.
You might already have guessed it. By having a "cover your ass" brand you can persuade CIOs and CTOs all over the world to buy your stupid products. But that is not enough: you can also make those products so impossible to work with that the poor employees of the ass-covering C(I|T)O will need some very extensive education - and Cisco can help them with that, for a small monetary contribution. Now, not only will the poor employee have his brain twisted to cope with the oddities of those ridiculous products, he will also get a certificate to put on his CV. And without knowing it, he is marketing the damn thing to all potential employers (ass-covering C(I|T)Os), so when they need new hardware they cannot help themselves - their choice will be Cisco.
Here is the model most other vendors use: make an inexpensive, good product and keep it self-explanatory for the most part.
Now, the latter just isn't very cunning - but one of those products is being packed and shipped to our office as I write this. And even though the ASA5500 was twice as expensive, I am going to remove it from my ass, place it with the rest of the garbage, and get my office back on track.
Wednesday, July 22, 2009
xorg.conf revisited
A long time ago we bought a cheap LCD for our living room. Back then I found a way to configure one of the numerous Linux boxes attached to it so that it ran at a resolution close to its native 1366x768. That xorg.conf file was lost along the way - and for a long time we have lived with what the screen's EDID had to say about it: 1280x1024. For a long time we have had to change the aspect ratio on every video played on this Linux box.
Today no more!
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
EndSection
Section "Module"
Load "dbe"
Load "extmod"
Load "glx"
EndSection
Section "Monitor"
Identifier "Hisense"
Modeline "1360x768" 85.500 1360 1424 1536 1792 768 771 777 795 +Hsync +Vsync
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
Option "ExactModeTimingsDVI" "true"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Hisense"
SubSection "Display"
Depth 24
Modes "1360x768"
EndSubSection
EndSection
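A quick sanity check of that modeline (my own arithmetic, not from the original post): the refresh rate is the pixel clock divided by the total frame size, i.e. 85,500,000 / (1792 × 795) ≈ 60.0 Hz - so the panel runs at a standard 60 Hz refresh despite the custom timings.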
Tuesday, January 27, 2009
Symfony and mod_rewrite
I am currently working as an independent developer on two projects using Symfony. This has been a great trip into quite a huge framework - and a great experience finally getting to know mod_rewrite a little better.
The first thing was that the URLs in Symfony soon end up looking something like this:
http://hostname/appname.php/modulename/actionname/arg1/arg1value/arg2/arg2value
Which was cool, but not as cool as if I could remove the ".php", hence I came up with:
.htaccess
# xxxx.php -> xxxx (if the file does not exist)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^([^\./]+)/(.*) $1.php/$2 [L]
This worked just fine!
One thing I had serious issues with was non-existent static resources. It boils down to Symfony thinking that every request for http://host/xxx/yyy/zzz is a call to action "zzz" on module "yyy" in application "xxx". This is because of this rule:
.htaccess
# no, so we redirect to our front web controller
RewriteRule ^(.*)$ index.php [QSA,L]
In general this is perfectly smart, except when you try something along the lines of:
http://hostname/images/somefolder/mysuperimage.png (which does not exist)
Then Symfony tries to execute the action "mysuperimage.png" on module "somefolder" in the application you created first (typically called "frontend", and represented by "index.php"). This is downright annoying!
But there is an easy fix. All the PHP files (two for each application: production and dev) are placed directly in the "web" folder. If all your resources are placed in subdirectories, then all you have to do is tell mod_rewrite to leave those folders alone!
.htaccess
# skip real folders
RewriteRule ^backend/.*$ - [PT]
RewriteRule ^css/.*$ - [PT]
RewriteRule ^images/.*$ - [PT]
RewriteRule ^resources/.*$ - [PT]
RewriteRule ^sfPropelPlugin/.*$ - [PT]
RewriteRule ^swf/.*$ - [PT]
In one place I had a lot of missing resources. I did not want to just give a Flash frontend a 404, so here is a quick fix for that:
.htaccess
# Rewrite missing resources: *.xxx -> 404.xxx
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^resources/(.*)\.png$ resources/404.png [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^resources/(.*)\.jpg$ resources/404.jpg [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^resources/(.*)\.flv$ resources/404.flv [L]
Thursday, April 10, 2008
Tapestry trick for caching autogenerated images
At work I am currently using Tapestry 5.0 to create a portal for the game we are working on, "Call of the Kings". This has given rise to a lot of stress, but working with it more and more convinces me that it was a good choice. I should warn you, though, that it is not a flat learning curve - consider it more like a wall that can be climbed if you try very hard. Once climbed, it will grant you enormous potential.
The trick I am going to tell about is how to cache generated images/pages/etc. where you know that the only thing affecting them is the request parameters (called the context in Tapestry 5.0 language, available during "Page Activation").
First of all, you must understand that the context for a Tapestry page is encoded as "subfolders". Let's say that the context expects 2 integers and a string; then it would look like this:
http://localhost/yourproject.isnice.com/pagenameinlowercase/42/32/thestring
Now this is VERY nice for bookmarking, nice for remembering, and fun when testing. It is, however, very different from what we are used to. It also leads to the following question:
What if I had a directory structure like "pagenameinlowercase/42/32/" and, inside that folder, a file called "thestring" - what would Tomcat/Jetty/etc. do with the request?
Yup, it would serve the static file in preference to calling onActivate(42, 32, "thestring") on your page called "PageNameInLowerCase". We will come back to this later.
Now, these parameters all start out as strings, so Tapestry uses something called a "coercer". This you have to inject into your page using IoC - don't ask, that is the wall you will climb. Once you have this little thing going, it is easy to convert between types (and using IoC you can even add coercions for your own types).
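For reference, here is a minimal sketch of what that injection could look like (my reconstruction, not code from this post - the page class name is made up). Tapestry 5's IoC container ships a built-in TypeCoercer service:
import org.apache.tapestry5.ioc.annotations.Inject;
import org.apache.tapestry5.ioc.services.TypeCoercer;

public class ShowHeraldryShield
{
    // Injected by Tapestry's IoC container.
    @Inject
    private TypeCoercer _typeCoercer;

    // Example: coerce a string context parameter to an Integer.
    // int size = _typeCoercer.coerce("64", Integer.class);
}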
OK, that was a lot of stuff with no real point - so here goes the actual case:
We have a heraldry generator in our game. It takes some base images and combines them into flags, shields, etc. Only one player can have any given combination, but since you choose 4 images and we have a lot of them, I guess there are a few billion combinations. Now, these images are used a lot on the portal to identify players, but only a few of the possible combinations are actually in use. Therefore we would like to generate them on the fly - and as promised, this is a case where ONLY the request parameters (Tapestry: context) have influence on the result.
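(For scale - my numbers, not the post's: with, say, 250 base images to choose from for each of the 4 slots, that is 250^4 ≈ 3.9 billion combinations.)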
Now, the shields we generate take 4 integers and a size (since we also want to scale them). So a request to our image-generator page looks like this:
http://portal.cotk.net/showheraldryshield/shield/301050/301000/200782/110881/64/76
Now, the first parameter, "shield", just tells it that we are going to need a shield (and not a flag or something like that). The next 4 are the actual heraldry components and the last one is the size. The trick is that in our component we can easily access these parameters and convert them to integers. It looks something like this:
@OnEvent(value="activate")
public StreamResponse onActivate(Object[] context)
{
String type = (String)context[0];
if("shield".equals(type))
{
return _heraldryImageProvider.getHeraldryShield(
_typeCoercer.coerce(context[1], Integer.class),
_typeCoercer.coerce(context[2], Integer.class),
_typeCoercer.coerce(context[3], Integer.class),
_typeCoercer.coerce(context[4], Integer.class),
_typeCoercer.coerce(context[5], Integer.class),
_typeCoercer.coerce(((String)context[6]).replace(".png", ""), Integer.class)
);
}
else
{
return null;
}
}
Basically we convert the parameters with indexes 1-6 to integers... But what is that replace(".png", "") thing at the end?
Yes, that is the trick. If the request had looked like this:
http://portal.cotk.net/showheraldryshield/shield/301050/301000/200782/110881/64/76.png
The last parameter would not have been "76" but "76.png". Stripping that part off is the clever bit, because when generating a shield that I have not generated before, I can store it as a plain PNG image on the server at the path matching the request. The next time someone requests this specific heraldry (and they will, since I intend to keep it) Tomcat will serve them the stored image directly.
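Here is roughly what that caching step could look like - a sketch under my own assumptions (the helper name, the ServletContext and the rendered BufferedImage are hypothetical, not from our actual code). After generating a shield, write the PNG to the exact path the request URL maps to, and the container will serve it as a static file from then on:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.servlet.ServletContext;

// Persist a generated shield where a static file would match the request URL, e.g.
// requestPath = "/showheraldryshield/shield/301050/301000/200782/110881/64/76.png".
void cacheShield(ServletContext servletContext, BufferedImage image, String requestPath)
    throws IOException
{
    File cached = new File(servletContext.getRealPath(requestPath));
    cached.getParentFile().mkdirs(); // create the "context folders" on disk
    ImageIO.write(image, "png", cached);
}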
Conclusion
Just by taking advantage of the parameter (Tapestry: context) encoding, we can implement caching simply by placing the result at the location that corresponds to the first request. Then Tomcat/Jetty/etc. will serve the cached version. This could be used for HTML/JS/JPEG/SWF/etc., but in case the result is something a little more "live" than an image, you should watch out that you don't give your visitors an easy injection point on your web server. (In the shield example, coercing every component to Integer already acts as validation - anything that is not a number fails before it is ever written to disk.) But in general this falls under the normal "watch out what you do with arguments given to you by a user" paranoia that one must have when dealing with a server application.
Wednesday, April 9, 2008
Ubuntu: Network Manager will not forget!
For a long time I have had a wired and wireless network shared with the rest of the apartments where I live. This has all been nice - except that one of the wireless APs is just within reach of where I often sit, but far from good enough to be useful. This should not pose a problem, except that I once told Network Manager about this AP's ESSID and gave it the key. Now it insists on switching to this AP every now and then, rendering my connection almost useless.
I have tried to search the Internet for a solution, but the words "remove", "essid" and "network manager" tend to bring up pages about someone who cannot connect to the Internet...
But today, I suddenly found it!!
This is where the magic takes place:
~/.gconf/system/networking/wireless/networks
The path seems obvious - in retrospect. Removing the entry for the offending ESSID from that folder did the trick.
Thursday, March 20, 2008
Multiple Eclipse Setups
This morning I saw that I now have 4 launch icons for Eclipse on my desktop (and that is all I have there). And I thought I might share with you why.
In a number of projects I have found Eclipse to be the best IDE. It has support for many languages, it is easy to work with, and it is actually also quite easy to extend. Now, when I first tried it many years ago I did not have the biggest computer, and since I develop in a lot of different languages it did not take long before my Eclipse installation was so bloated with plug-ins that it could hardly start.
The lesson learned was that Eclipse should be kept slim - with a minimum of plug-ins. This in turn made it impossible to develop anything other than Java in it. I therefore ventured out and decided that my current machine could handle at least the PDT extension as well. It could. But PDT had a different idea about what F11 (in JDT: launch the last launched configuration) was for, and that ended up making me very angry.
Now, I knew that it was possible to have multiple workspaces (I use that already to keep my Java projects and websites apart). But I did not know that it was also possible to have separate setups with different sets of plug-ins.
It took me a while to get it working, and I am probably not doing it the right way, but it works very well. I now have a fully functional Java IDE, PHP IDE, CPP IDE, and an alternative Java IDE. I rarely have to mix languages in one project, and even if I did I could just create a new Eclipse setup and install the mixture of plug-ins needed.
So, the "code" (actually it is just a small shell script, but I guess you get the idea):
# Note: everything after -vmargs is passed to the JVM, so it must come last.
/opt/eclipse/eclipse \
-data /home/emanuel/sources/java/workspace/ \
-configuration "file:///home/emanuel/sources/java/workspace/Eclipse_Configuration" \
-vmargs -Xmx1024M
As you can see, adding -data makes Eclipse start in a preselected workspace. Using -configuration instructs Eclipse to use the given directory for its settings files. (Note that JVM options such as -Xmx only take effect after -vmargs, which must come last on the command line.) Then all you have to do when installing plug-ins is to place them somewhere "special" for the given setup. The fact that the configuration directory is inside my workspace is just to make things easier for me - it could be placed anywhere and be used for several workspaces.
Wednesday, February 13, 2008
Iterable-mutable-hashtable
For a long time I have been trying to find a data structure for a set of problems we kept running into at work. What we wanted was a container that was both fast for lookups (hashtable) and good for iteration (linked list) - and with the special feature that you could add and remove entries while iterating it.
The first two properties hold for most implementations of the Java interface Map. But the last requirement is not met by the standard implementations - they all throw a ConcurrentModificationException. The reason we wanted the ability to modify the container concurrently was that a lot of iteration over objects' children resulted in the children self-destructing, i.e. removing themselves from the very container we were iterating. Now, one could argue that we could just use the iterator's remove() method, but since it was not the parent's decision, the child would somehow have to know about the current iterator... I guess you can see the code cluttering up down that train of thought.
So, after many ideas that seemed to work on paper but showed some flaw once implemented, I came up with the LinkedList/Hashtable mix that this entry is all about.
Basic idea
The idea is really simple: create a class where insertions are made both into a linked list (for iteration) and a hashtable (for lookups). This did not work directly in practice, since LinkedList does not like concurrent modifications either - but the idea was sound, so I just had to write my own linked list implementation.
Implementation
Actually implementing the thing took a few hours once the idea was in place - mainly because the container had to handle serialization, and a linked list cannot be serialized the default way (the recursive prev/next references blow the stack).
Code
import java.io.IOException;
import java.io.Serializable;
import java.util.Collection;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.NoSuchElementException;
import java.util.Set;
import java.util.Vector;
import java.util.Map.Entry;

/**
 * The following is much like the {@link LinkedHashMap} except it allows concurrent
 * modifications.
 *
 * The order is not guaranteed after serialization.
 *
 * @author Emanuel Greisen
 */
public class ThreadSafeHashtable<K,T> implements Iterable<T>, Serializable
{
    private static final long serialVersionUID = 2131757156391825454L;
    /** Hashtable backing the fast lookups; each value wraps an item of the linked list. */
    protected Hashtable<K,ThreadSafeHashitem<T>> lookup_table =
        new Hashtable<K,ThreadSafeHashitem<T>>();
    /** The ends of the linked list used for iteration (rebuilt after deserialization). */
    transient ThreadSafeHashitem<T> first_item;
    transient ThreadSafeHashitem<T> last_item;

    private void writeObject(java.io.ObjectOutputStream out)
        throws IOException
    {
        out.defaultWriteObject();
    }

    private void readObject(java.io.ObjectInputStream in)
        throws IOException, ClassNotFoundException
    {
        in.defaultReadObject();
        // OPTIMIZE: postpone this until we need it. Many deserializations on the
        // server never iterate the table, so relinking everything for iteration
        // is not necessary.
        if(lookup_table != null)
        {
            for(Entry<K,ThreadSafeHashitem<T>> item_entry : lookup_table.entrySet())
            {
                if(first_item == null)
                {
                    first_item = last_item = item_entry.getValue();
                }
                else
                {
                    last_item.next = item_entry.getValue();
                    item_entry.getValue().prev = last_item;
                    last_item = item_entry.getValue();
                }
            }
        }
    }

    public Iterator<T> iterator()
    {
        return new ThreadSafeHashtableIterator<K,T>(first_item);
    }

    /**
     * Put an object in the hashtable; note that if any iterators are open the
     * addition will not be visible until these iterators have closed or the
     * object has been serialized/deserialized.
     * @param key
     * @param value
     * @return the value previously associated with the key, or null
     */
    public T putObject(final K key, final T value)
    {
        if(key == null)
            throw new IllegalArgumentException("'key' may not be null.");
        ThreadSafeHashitem<T> old = lookup_table.get(key);
        if(old != null)
        {
            T old_val = old.value;
            old.value = value;
            return old_val;
        }
        ThreadSafeHashitem<T> new_item = new ThreadSafeHashitem<T>(value);
        if(last_item != null)
        {
            last_item.next = new_item;
            new_item.prev = last_item;
            last_item = new_item;
        }
        else
        {
            first_item = new_item;
            last_item = new_item;
        }
        lookup_table.put(key, new_item);
        return null;
    }

    /**
     * Remove all objects from the hashtable.
     */
    public void clear()
    {
        // Nuke all "next"-refs
        ThreadSafeHashitem<T> item = first_item;
        while(item != null)
        {
            ThreadSafeHashitem<T> next_item = item.next;
            item.next = null;
            item.prev = null;
            item = next_item;
        }
        // Reset the list ends so new insertions start a fresh list.
        first_item = null;
        last_item = null;
        lookup_table.clear();
    }

    /**
     * Return true if the hashtable contains a certain key.
     * @param key
     * @return true if the key is present
     */
    public boolean containsKey(K key)
    {
        return lookup_table.containsKey(key);
    }

    /**
     * Get the object that is associated with the given key.
     * @param key
     * @return the value, or null if the key is not present
     */
    public T getObject(K key)
    {
        ThreadSafeHashitem<T> item = lookup_table.get(key);
        if(item != null)
        {
            return item.value;
        }
        return null;
    }

    /**
     * The number of objects in the hashtable.
     * @return the size
     */
    public int size()
    {
        return lookup_table.size();
    }

    public Collection<T> values()
    {
        Vector<T> vals = new Vector<T>();
        for(Entry<K, ThreadSafeHashitem<T>> entry : lookup_table.entrySet())
        {
            vals.add(entry.getValue().value);
        }
        return vals;
    }

    public Enumeration<K> keys()
    {
        return lookup_table.keys();
    }

    public T removeObject(K key)
    {
        if(key == null)
            throw new IllegalArgumentException("'key' may not be null.");
        ThreadSafeHashitem<T> item = lookup_table.remove(key);
        if(item != null)
        {
            // Link the previous to the next
            if(item.prev != null)
            {
                item.prev.next = item.next;
            }
            else
            {
                // we were the first (update first_item)
                first_item = item.next;
            }
            // Link the next to the prev
            if(item.next != null)
            {
                item.next.prev = item.prev;
            }
            else
            {
                // We were the last (update last_item)
                last_item = item.prev;
            }
            // Flag the item so any open iterators will skip it.
            item.removed = true;
            return item.value;
        }
        return null;
    }

    /**
     * Note that this method is NOT safe to modify in.
     * TODO: make a copy.
     * @return the key set backed by the lookup table
     */
    public Set<K> keySet()
    {
        return lookup_table.keySet();
    }

    public void addAll(ThreadSafeHashtable<K, T> table)
    {
        for(K k : table.keySet())
        {
            putObject(k, table.getObject(k));
        }
    }
}

class ThreadSafeHashitem<T> implements Serializable
{
    private static final long serialVersionUID = 1906364262566366088L;
    T value;
    transient ThreadSafeHashitem<T> prev;
    transient ThreadSafeHashitem<T> next;
    transient boolean removed;

    public ThreadSafeHashitem(T value)
    {
        this.value = value;
    }
}

class ThreadSafeHashtableIterator<K,T> implements Iterator<T>
{
    ThreadSafeHashitem<T> item;

    public ThreadSafeHashtableIterator(ThreadSafeHashitem<T> item)
    {
        this.item = item;
    }

    public boolean hasNext()
    {
        // Skip all the removed items.
        while(item != null && item.removed)
        {
            item = item.next;
        }
        return item != null;
    }

    public T next()
    {
        // Skip removed items here as well, in case next() is called without hasNext().
        while(item != null && item.removed)
        {
            item = item.next;
        }
        if(item == null)
            throw new NoSuchElementException();
        T val = item.value;
        item = item.next;
        return val;
    }

    public void remove()
    {
        // Removal while iterating is done via ThreadSafeHashtable.removeObject(),
        // which flags the item so open iterators skip it.
        throw new UnsupportedOperationException("use ThreadSafeHashtable.removeObject()");
    }
}
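And a minimal usage sketch (my addition, not from the original post), showing the whole point - entries can be removed while the container is being iterated, without any ConcurrentModificationException:
ThreadSafeHashtable<Integer, String> children = new ThreadSafeHashtable<Integer, String>();
children.putObject(1, "first");
children.putObject(2, "second");
children.putObject(3, "third");
for(String child : children)
{
    // With the java.util collections this loop would throw
    // ConcurrentModificationException; here the removed entry is
    // simply skipped by the open iterator.
    if("first".equals(child))
        children.removeObject(3);
    System.out.println(child); // prints "first", then "second"
}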