import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class App {
    public static void main(String[] args) throws Exception {
        Map<Point, Point> map1 = new ConcurrentHashMap<>();
        ChronicleMap<Point, Point> map2 = ChronicleMap
                .of(Point.class, Point.class)
                .name("map")
                .averageKey(new Point(10, 10))
                .averageValue(new Point(10, 10))
                .entries(50)
                .createPersistedTo(new File("c:/temp/map/param.dat"));
        Point key = new Point(12, 12);
        key.hashCode(); // populates the cached _hash field before the put
        map1.put(key, key);
        map2.put(key, key);
        System.out.println("ConcurrentHashMap.get returned " + map1.get(new Point(12, 12)));
        System.out.println("ChronicleMap.get returned " + map2.get(new Point(12, 12)));
    }
}
class Point implements Serializable {
    private int x = 0;
    private int y = 0;
    private int _hash = 0; // cached hash; note this non-transient field is serialized too

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() {
        return x;
    }

    public int getY() {
        return y;
    }

    @Override
    public String toString() {
        return super.toString() + " {" + x + "," + y + "}";
    }

    @Override
    public int hashCode() {
        _hash = 1;
        _hash = _hash * 17 + x;
        _hash = _hash * 31 + y;
        return _hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Point) {
            return (x == ((Point) obj).getX()) && (y == ((Point) obj).getY());
        }
        return false;
    }
}
As you can see in the example above, ChronicleMap behaves somewhat differently from ConcurrentHashMap (and from HashMap): it does not appear to use hashCode or equals to look up the key.
Can anyone identify how to resolve this?
Update: when executed, the program prints the following:
ConcurrentHashMap.get returned App.Point@38f {12,12}
ChronicleMap.get returned null
Chronicle serializes the key and takes a 64-bit hash of the resulting bytes.
A 64-bit hash is used because the map is designed for very large numbers of keys, billions of them, and a 32-bit hash tends to have a high collision rate once you have millions of keys.
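The collision claim can be checked with a back-of-envelope birthday-bound calculation (my own illustration, not part of the answer): among n uniformly random b-bit hashes, the expected number of colliding pairs is roughly n(n-1)/2 divided by 2^b.

```java
// Birthday-bound estimate of hash collisions (illustrative sketch).
public class CollisionEstimate {

    // Expected number of colliding pairs among n uniform random hashes of `bits` bits:
    // approximately C(n, 2) / 2^bits.
    static double expectedCollisions(long n, int bits) {
        return (double) n * (n - 1) / 2.0 / Math.pow(2, bits);
    }

    public static void main(String[] args) {
        // 1 million keys with a 32-bit hash: over a hundred expected collisions.
        System.out.printf("1M keys, 32-bit: ~%.0f collisions%n",
                expectedCollisions(1_000_000L, 32));
        // 1 billion keys with a 64-bit hash: collisions are still unlikely.
        System.out.printf("1B keys, 64-bit: ~%.3f collisions%n",
                expectedCollisions(1_000_000_000L, 64));
    }
}
```

This matches the answer's point: at millions of keys a 32-bit hash already collides, while a 64-bit hash stays effectively collision-free even at billions of keys.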
It can also use more advanced hashing strategies such as https://github.com/openhft/zero-alocation-hashing
Note: using serialization is the least efficient way to do this, but it is the simplest for this example.
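The failure can be reproduced without ChronicleMap at all. Because the key is hashed from its serialized bytes, and the cached `_hash` field is part of the serialized state, the stored key (whose `hashCode()` was called, setting `_hash` to 911) and the lookup key (whose `_hash` is still 0) serialize to different bytes. A minimal sketch; the class and method names here are my own illustration, not ChronicleMap API:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;

public class HashCacheDemo {

    // Stand-in for the Point class in the question: the cached hash is serialized.
    static class CachingPoint implements Serializable {
        private final int x, y;
        private int _hash = 0; // non-transient, so ObjectOutputStream writes it

        CachingPoint(int x, int y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public int hashCode() {
            _hash = (17 + x) * 31 + y; // same result as the original chained computation
            return _hash;
        }
    }

    static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Returns true only if the two "equal" points serialize to identical bytes.
    public static boolean bytesMatch() {
        CachingPoint stored = new CachingPoint(12, 12);
        stored.hashCode(); // mirrors key.hashCode() in the question: _hash becomes 911
        CachingPoint query = new CachingPoint(12, 12); // _hash is still 0
        return Arrays.equals(serialize(stored), serialize(query));
    }

    public static void main(String[] args) {
        System.out.println("serialized forms match: " + bytesMatch()); // prints false
    }
}
```

Declaring `_hash` as `transient` (or dropping the cached field entirely and computing the hash on demand) keeps it out of the serialized form, so both instances produce identical bytes and the byte-level 64-bit hash would then match on lookup.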