Introduction:
People say theory is the foundation of learning, but a theory you have never exercised yourself is just something other people told you; you only really know once you have tried it, because seeing is believing. When I first studied HashMap I read plenty of material online, and most of it piles up theory without anything you can actually run and observe. Understanding HashMap thoroughly from an article of several thousand words, without touching the code yourself, is difficult. Debug mode is not ideal either, because a lot of irrelevant call paths get mixed in and you have to step past them to reach the information you care about. So in this post we analyze the HashMap source by printing logs: wherever something is unclear, just drop a log statement there.
Why log against a copy of the source instead of debugging
Reason
If you set a breakpoint inside the constructor HashMap(int initialCapacity) of the original source, then no matter what initial capacity you pass, the first value shown at that breakpoint is always 11, and the breakpoint is hit many times before execution ever reaches the map you created yourself, which makes it awkward to use.
Why is that? After tracing the callers, the reason is clear: HashMap is a general-purpose container, used not only by your code but by many frameworks and by the JDK itself, and the JVM's class-loading machinery loads the JDK's internal classes first, so it is no surprise that the constructor is hit before your own call. The call that initializes it with 11 comes from java.util.jar.Attributes; see the code below.
public Attributes() {
    this(11);
}

public Attributes(int size) {
    map = new HashMap<>(size);
}
Workaround
Is the problem unsolvable, then? Certainly not. Since I use IDEA, the trick is to right-click the breakpoint and enter initialCapacity == <the capacity you set> in the Condition field; the debugger then only suspends when that condition holds, as shown in the figure. This only filters out some of the irrelevant hits, though — it cannot guarantee that the hit comes from your own call, because other callers may pass the same value.
Breakpoints versus a source copy
Breakpoints are well suited to stepping through your own code and following its flow; for a heavily used, fairly complex general-purpose utility class they are not a great fit. What we want here is to study a handful of methods inside HashMap, and for that a controlled-variable approach works well: care only about our own execution path. Copying the class into a class of our own is the better choice — with our own copy of the source, both logging and breakpoints become easy to work with.
Questions to investigate:
Is HashMap's resizing decided by the number of keys or by the number of occupied array slots? Many articles online say it depends on how many slots are occupied; others say that while the bucket count is below 64 it only resizes, and above 64 it depends on bucket occupancy. After reading the source, the code does not seem to work that way. Opinions also differ on whether a new element is added at the head or the tail of a bucket's linked list; the source suggests the tail. So today we run the source and look at the actual behaviour.
The conclusions up front — you can jump straight to the test code and results at the end:
1. The resizing of HashMap's array is driven by the number of keys, not by the number of occupied array slots.
2. A new element in a bucket's list is appended at the tail. In JDK 1.7 it was reportedly inserted at the head; I have not tested that, and everything here uses JDK 1.8.
Source code:
MyHashMap
package com
.map
;
import java
.io
.IOException
;
import java
.io
.InvalidObjectException
;
import java
.io
.Serializable
;
import java
.lang
.reflect
.ParameterizedType
;
import java
.lang
.reflect
.Type
;
import java
.util
.*
;
import java
.util
.function
.BiConsumer
;
import java
.util
.function
.BiFunction
;
import java
.util
.function
.Consumer
;
import java
.util
.function
.Function
;
@SuppressWarnings("all")
public class MyHashMap<K, V> extends AbstractMap<K, V>
implements Map<K, V>, Cloneable
, Serializable
{
private static final long serialVersionUID
= 362498820763181265L
;
transient Set
<K> keySet
;
transient Collection
<V> values
;
static final int DEFAULT_INITIAL_CAPACITY
= 1 << 4;
static final int MAXIMUM_CAPACITY
= 1 << 30;
static final float DEFAULT_LOAD_FACTOR
= 0.75f;
static final int TREEIFY_THRESHOLD
= 8;
static final int UNTREEIFY_THRESHOLD
= 6;
static final int MIN_TREEIFY_CAPACITY
= 64;
static class Node<K, V> implements Entry<K, V> {
final int hash
;
final K key
;
V value
;
Node
<K, V> next
;
Node(int hash
, K key
, V value
, Node
<K, V> next
) {
this.hash
= hash
;
this.key
= key
;
this.value
= value
;
this.next
= next
;
}
public final K
getKey() {
return key
;
}
public final V
getValue() {
return value
;
}
public final String
toString() {
return key
+ "=" + value
;
}
public final int hashCode() {
return Objects
.hashCode(key
) ^ Objects
.hashCode(value
);
}
public final V
setValue(V newValue
) {
V oldValue
= value
;
value
= newValue
;
return oldValue
;
}
public final boolean equals(Object o
) {
if (o
== this)
return true;
if (o
instanceof Map.Entry) {
Entry
<?, ?> e
= (Entry
<?, ?>) o
;
if (Objects
.equals(key
, e
.getKey()) &&
Objects
.equals(value
, e
.getValue()))
return true;
}
return false;
}
}
static final int hash(Object key
) {
int h
;
return (key
== null
) ? 0 : (h
= key
.hashCode()) ^ (h
>>> 16);
}
static Class
<?> comparableClassFor(Object x
) {
if (x
instanceof Comparable) {
Class
<?> c
;
Type
[] ts
, as
;
Type t
;
ParameterizedType p
;
if ((c
= x
.getClass()) == String
.class)
return c
;
if ((ts
= c
.getGenericInterfaces()) != null
) {
for (int i
= 0; i
< ts
.length
; ++i
) {
if (((t
= ts
[i
]) instanceof ParameterizedType) &&
((p
= (ParameterizedType
) t
).getRawType() ==
Comparable
.class) &&
(as
= p
.getActualTypeArguments()) != null
&&
as
.length
== 1 && as
[0] == c
)
return c
;
}
}
}
return null
;
}
@SuppressWarnings({"rawtypes", "unchecked"})
static int compareComparables(Class
<?> kc
, Object k
, Object x
) {
return (x
== null
|| x
.getClass() != kc
? 0 :
((Comparable
) k
).compareTo(x
));
}
static final int tableSizeFor(int cap
) {
int n
= cap
- 1;
n
|= n
>>> 1;
n
|= n
>>> 2;
n
|= n
>>> 4;
n
|= n
>>> 8;
n
|= n
>>> 16;
return (n
< 0) ? 1 : (n
>= MAXIMUM_CAPACITY
) ? MAXIMUM_CAPACITY
: n
+ 1;
}
transient Node
<K, V>[] table
;
transient Set
<Entry
<K, V>> entrySet
;
transient int size
;
transient int modCount
;
int threshold
;
final float loadFactor
;
public MyHashMap(int initialCapacity
, float loadFactor
) {
if (initialCapacity
< 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity
);
if (initialCapacity
> MAXIMUM_CAPACITY
)
initialCapacity
= MAXIMUM_CAPACITY
;
if (loadFactor
<= 0 || Float
.isNaN(loadFactor
))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor
);
this.loadFactor
= loadFactor
;
this.threshold
= tableSizeFor(initialCapacity
);
}
public MyHashMap(int initialCapacity
) {
this(initialCapacity
, DEFAULT_LOAD_FACTOR
);
}
public MyHashMap() {
this.loadFactor
= DEFAULT_LOAD_FACTOR
;
}
public MyHashMap(Map
<? extends K, ? extends V> m
) {
this.loadFactor
= DEFAULT_LOAD_FACTOR
;
putMapEntries(m
, false);
}
final void putMapEntries(Map
<? extends K, ? extends V> m
, boolean evict
) {
int s
= m
.size();
if (s
> 0) {
if (table
== null
) {
float ft
= ((float) s
/ loadFactor
) + 1.0F;
int t
= ((ft
< (float) MAXIMUM_CAPACITY
) ?
(int) ft
: MAXIMUM_CAPACITY
);
if (t
> threshold
)
threshold
= tableSizeFor(t
);
} else if (s
> threshold
)
resize();
for (Entry
<? extends K, ? extends V> e
: m
.entrySet()) {
K key
= e
.getKey();
V value
= e
.getValue();
putVal(hash(key
), key
, value
, false, evict
);
}
}
}
public int size() {
return size
;
}
public boolean isEmpty() {
return size
== 0;
}
public V
get(Object key
) {
Node
<K, V> e
;
return (e
= getNode(hash(key
), key
)) == null
? null
: e
.value
;
}
final MyHashMap
.Node
<K, V> getNode(int hash
, Object key
) {
MyHashMap
.Node
<K, V>[] tab
;
MyHashMap
.Node
<K, V> first
, e
;
int n
;
K k
;
if ((tab
= table
) != null
&& (n
= tab
.length
) > 0 && (first
= tab
[(n
- 1) & hash
]) != null
) {
if (first
.hash
== hash
&&
((k
= first
.key
) == key
|| (key
!= null
&& key
.equals(k
))))
return first
;
if ((e
= first
.next
) != null
) {
if (first
instanceof TreeNode)
return ((TreeNode
<K, V>) first
).getTreeNode(hash
, key
);
do {
if (e
.hash
== hash
&&
((k
= e
.key
) == key
|| (key
!= null
&& key
.equals(k
))))
return e
;
} while ((e
= e
.next
) != null
);
}
}
return null
;
}
public boolean containsKey(Object key
) {
return getNode(hash(key
), key
) != null
;
}
public V
put(K key
, V value
) {
return putVal(hash(key
), key
, value
, false, true);
}
final V
putVal(int hash
, K key
, V value
, boolean onlyIfAbsent
, boolean evict
) {
Node
<K, V>[] tab
;
Node
<K, V> p
;
int n
, i
;
if ((tab
= table
) == null
|| (n
= tab
.length
) == 0) {
n
= (tab
= resize()).length
;
}
int i1
= n
- 1;
i
= i1
& hash
;
if ((p
= tab
[i
]) == null
) {
tab
[i
] = newNode(hash
, key
, value
, null
);
} else {
MyHashMap
.Node
<K, V> e
;
K k
;
if (p
.hash
== hash
&& ((k
= p
.key
) == key
|| (key
!= null
&& key
.equals(k
))))
e
= p
;
else if (p
instanceof MyHashMap.TreeNode)
e
= ((MyHashMap
.TreeNode
<K, V>) p
).putTreeVal(this, tab
, hash
, key
, value
);
else {
for (int binCount
= 0; ; ++binCount
) {
if ((e
= p
.next
) == null
) {
p
.next
= newNode(hash
, key
, value
, null
);
if (binCount
>= TREEIFY_THRESHOLD
- 1)
treeifyBin(tab
, hash
);
break;
}
if (e
.hash
== hash
&& ((k
= e
.key
) == key
|| (key
!= null
&& key
.equals(k
))))
break;
p
= e
;
}
}
if (e
!= null
) {
V oldValue
= e
.value
;
if (!onlyIfAbsent
|| oldValue
== null
)
e
.value
= value
;
afterNodeAccess(e
);
return oldValue
;
}
}
++modCount
;
if (++size
> threshold
) {
resize();
System
.out
.println(key
+ "扩容");
}
afterNodeInsertion(evict
);
return null
;
}
final Node
<K, V>[] resize() {
Node
<K, V>[] oldTab
= table
;
int oldCap
= (oldTab
== null
) ? 0 : oldTab
.length
;
int oldThr
= threshold
;
int newCap
, newThr
= 0;
if (oldCap
> 0) {
if (oldCap
>= MAXIMUM_CAPACITY
) {
threshold
= Integer
.MAX_VALUE
;
return oldTab
;
} else if ((newCap
= oldCap
<< 1) < MAXIMUM_CAPACITY
&&
oldCap
>= DEFAULT_INITIAL_CAPACITY
)
newThr
= oldThr
<< 1;
} else if (oldThr
> 0)
newCap
= oldThr
;
else {
newCap
= DEFAULT_INITIAL_CAPACITY
;
newThr
= (int) (DEFAULT_LOAD_FACTOR
* DEFAULT_INITIAL_CAPACITY
);
}
if (newThr
== 0) {
float ft
= (float) newCap
* loadFactor
;
newThr
= (newCap
< MAXIMUM_CAPACITY
&& ft
< (float) MAXIMUM_CAPACITY
?
(int) ft
: Integer
.MAX_VALUE
);
}
threshold
= newThr
;
@SuppressWarnings({"rawtypes", "unchecked"})
Node
<K, V>[] newTab
= (Node
<K, V>[]) new Node[newCap
];
table
= newTab
;
if (oldTab
!= null
) {
for (int j
= 0; j
< oldCap
; ++j
) {
Node
<K, V> e
;
if ((e
= oldTab
[j
]) != null
) {
oldTab
[j
] = null
;
if (e
.next
== null
)
newTab
[e
.hash
& (newCap
- 1)] = e
;
else if (e
instanceof TreeNode)
((TreeNode
<K, V>) e
).split(this, newTab
, j
, oldCap
);
else {
Node
<K, V> loHead
= null
, loTail
= null
;
Node
<K, V> hiHead
= null
, hiTail
= null
;
Node
<K, V> next
;
do {
next
= e
.next
;
if ((e
.hash
& oldCap
) == 0) {
if (loTail
== null
)
loHead
= e
;
else
loTail
.next
= e
;
loTail
= e
;
} else {
if (hiTail
== null
)
hiHead
= e
;
else
hiTail
.next
= e
;
hiTail
= e
;
}
} while ((e
= next
) != null
);
if (loTail
!= null
) {
loTail
.next
= null
;
newTab
[j
] = loHead
;
}
if (hiTail
!= null
) {
hiTail
.next
= null
;
newTab
[j
+ oldCap
] = hiHead
;
}
}
}
}
}
return newTab
;
}
final void treeifyBin(Node
<K, V>[] tab
, int hash
) {
int n
, index
;
Node
<K, V> e
;
if (tab
== null
|| (n
= tab
.length
) < MIN_TREEIFY_CAPACITY
)
resize();
else if ((e
= tab
[index
= (n
- 1) & hash
]) != null
) {
TreeNode
<K, V> hd
= null
, tl
= null
;
do {
TreeNode
<K, V> p
= replacementTreeNode(e
, null
);
if (tl
== null
)
hd
= p
;
else {
p
.prev
= tl
;
tl
.next
= p
;
}
tl
= p
;
} while ((e
= e
.next
) != null
);
if ((tab
[index
] = hd
) != null
)
hd
.treeify(tab
);
}
}
public void putAll(Map
<? extends K, ? extends V> m
) {
putMapEntries(m
, true);
}
public V
remove(Object key
) {
Node
<K, V> e
;
return (e
= removeNode(hash(key
), key
, null
, false, true)) == null
?
null
: e
.value
;
}
final Node
<K, V> removeNode(int hash
, Object key
, Object value
,
boolean matchValue
, boolean movable
) {
Node
<K, V>[] tab
;
Node
<K, V> p
;
int n
, index
;
if ((tab
= table
) != null
&& (n
= tab
.length
) > 0 &&
(p
= tab
[index
= (n
- 1) & hash
]) != null
) {
Node
<K, V> node
= null
, e
;
K k
;
V v
;
if (p
.hash
== hash
&&
((k
= p
.key
) == key
|| (key
!= null
&& key
.equals(k
))))
node
= p
;
else if ((e
= p
.next
) != null
) {
if (p
instanceof TreeNode)
node
= ((TreeNode
<K, V>) p
).getTreeNode(hash
, key
);
else {
do {
if (e
.hash
== hash
&&
((k
= e
.key
) == key
||
(key
!= null
&& key
.equals(k
)))) {
node
= e
;
break;
}
p
= e
;
} while ((e
= e
.next
) != null
);
}
}
if (node
!= null
&& (!matchValue
|| (v
= node
.value
) == value
||
(value
!= null
&& value
.equals(v
)))) {
if (node
instanceof TreeNode)
((TreeNode
<K, V>) node
).removeTreeNode(this, tab
, movable
);
else if (node
== p
)
tab
[index
] = node
.next
;
else
p
.next
= node
.next
;
++modCount
;
--size
;
afterNodeRemoval(node
);
return node
;
}
}
return null
;
}
public void clear() {
Node
<K, V>[] tab
;
modCount
++;
if ((tab
= table
) != null
&& size
> 0) {
size
= 0;
for (int i
= 0; i
< tab
.length
; ++i
)
tab
[i
] = null
;
}
}
public boolean containsValue(Object value
) {
Node
<K, V>[] tab
;
V v
;
if ((tab
= table
) != null
&& size
> 0) {
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
) {
if ((v
= e
.value
) == value
||
(value
!= null
&& value
.equals(v
)))
return true;
}
}
}
return false;
}
public Set
<K> keySet() {
Set
<K> ks
= keySet
;
if (ks
== null
) {
ks
= new KeySet();
keySet
= ks
;
}
return ks
;
}
final class KeySet extends AbstractSet<K> {
public final int size() {
return size
;
}
public final void clear() {
MyHashMap
.this.clear();
}
public final Iterator
<K> iterator() {
return new KeyIterator();
}
public final boolean contains(Object o
) {
return containsKey(o
);
}
public final boolean remove(Object key
) {
return removeNode(hash(key
), key
, null
, false, true) != null
;
}
public final Spliterator
<K> spliterator() {
return new KeySpliterator<>(MyHashMap
.this, 0, -1, 0, 0);
}
public final void forEach(Consumer
<? super K
> action
) {
Node
<K, V>[] tab
;
if (action
== null
)
throw new NullPointerException();
if (size
> 0 && (tab
= table
) != null
) {
int mc
= modCount
;
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
)
action
.accept(e
.key
);
}
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
}
public Collection
<V> values() {
Collection
<V> vs
= values
;
if (vs
== null
) {
vs
= new Values();
values
= vs
;
}
return vs
;
}
final class Values extends AbstractCollection<V> {
public final int size() {
return size
;
}
public final void clear() {
MyHashMap
.this.clear();
}
public final Iterator
<V> iterator() {
return new ValueIterator();
}
public final boolean contains(Object o
) {
return containsValue(o
);
}
public final Spliterator
<V> spliterator() {
return new ValueSpliterator<>(MyHashMap
.this, 0, -1, 0, 0);
}
public final void forEach(Consumer
<? super V
> action
) {
Node
<K, V>[] tab
;
if (action
== null
)
throw new NullPointerException();
if (size
> 0 && (tab
= table
) != null
) {
int mc
= modCount
;
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
)
action
.accept(e
.value
);
}
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
}
public Set
<Entry
<K, V>> entrySet() {
Set
<Entry
<K, V>> es
;
return (es
= entrySet
) == null
? (entrySet
= new EntrySet()) : es
;
}
final class EntrySet extends AbstractSet<Entry
<K, V>> {
public final int size() {
return size
;
}
public final void clear() {
MyHashMap
.this.clear();
}
public final Iterator
<Entry
<K, V>> iterator() {
return new EntryIterator();
}
public final boolean contains(Object o
) {
if (!(o
instanceof Map.Entry))
return false;
Entry
<?, ?> e
= (Entry
<?, ?>) o
;
Object key
= e
.getKey();
Node
<K, V> candidate
= getNode(hash(key
), key
);
return candidate
!= null
&& candidate
.equals(e
);
}
public final boolean remove(Object o
) {
if (o
instanceof Map.Entry) {
Entry
<?, ?> e
= (Entry
<?, ?>) o
;
Object key
= e
.getKey();
Object value
= e
.getValue();
return removeNode(hash(key
), key
, value
, true, true) != null
;
}
return false;
}
public final Spliterator
<Entry
<K, V>> spliterator() {
return new EntrySpliterator<>(MyHashMap
.this, 0, -1, 0, 0);
}
public final void forEach(Consumer
<? super Entry
<K, V>> action
) {
Node
<K, V>[] tab
;
if (action
== null
)
throw new NullPointerException();
if (size
> 0 && (tab
= table
) != null
) {
int mc
= modCount
;
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
)
action
.accept(e
);
}
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
}
@Override
public V
getOrDefault(Object key
, V defaultValue
) {
Node
<K, V> e
;
return (e
= getNode(hash(key
), key
)) == null
? defaultValue
: e
.value
;
}
@Override
public V
putIfAbsent(K key
, V value
) {
return putVal(hash(key
), key
, value
, true, true);
}
@Override
public boolean remove(Object key
, Object value
) {
return removeNode(hash(key
), key
, value
, true, true) != null
;
}
@Override
public boolean replace(K key
, V oldValue
, V newValue
) {
Node
<K, V> e
;
V v
;
if ((e
= getNode(hash(key
), key
)) != null
&&
((v
= e
.value
) == oldValue
|| (v
!= null
&& v
.equals(oldValue
)))) {
e
.value
= newValue
;
afterNodeAccess(e
);
return true;
}
return false;
}
@Override
public V
replace(K key
, V value
) {
Node
<K, V> e
;
if ((e
= getNode(hash(key
), key
)) != null
) {
V oldValue
= e
.value
;
e
.value
= value
;
afterNodeAccess(e
);
return oldValue
;
}
return null
;
}
@Override
public V
computeIfAbsent(K key
,
Function
<? super K
, ? extends V> mappingFunction
) {
if (mappingFunction
== null
)
throw new NullPointerException();
int hash
= hash(key
);
Node
<K, V>[] tab
;
Node
<K, V> first
;
int n
, i
;
int binCount
= 0;
TreeNode
<K, V> t
= null
;
Node
<K, V> old
= null
;
if (size
> threshold
|| (tab
= table
) == null
||
(n
= tab
.length
) == 0)
n
= (tab
= resize()).length
;
if ((first
= tab
[i
= (n
- 1) & hash
]) != null
) {
if (first
instanceof TreeNode)
old
= (t
= (TreeNode
<K, V>) first
).getTreeNode(hash
, key
);
else {
Node
<K, V> e
= first
;
K k
;
do {
if (e
.hash
== hash
&&
((k
= e
.key
) == key
|| (key
!= null
&& key
.equals(k
)))) {
old
= e
;
break;
}
++binCount
;
} while ((e
= e
.next
) != null
);
}
V oldValue
;
if (old
!= null
&& (oldValue
= old
.value
) != null
) {
afterNodeAccess(old
);
return oldValue
;
}
}
V v
= mappingFunction
.apply(key
);
if (v
== null
) {
return null
;
} else if (old
!= null
) {
old
.value
= v
;
afterNodeAccess(old
);
return v
;
} else if (t
!= null
)
t
.putTreeVal(this, tab
, hash
, key
, v
);
else {
tab
[i
] = newNode(hash
, key
, v
, first
);
if (binCount
>= TREEIFY_THRESHOLD
- 1)
treeifyBin(tab
, hash
);
}
++modCount
;
++size
;
afterNodeInsertion(true);
return v
;
}
public V
computeIfPresent(K key
,
BiFunction
<? super K
, ? super V
, ? extends V> remappingFunction
) {
if (remappingFunction
== null
)
throw new NullPointerException();
Node
<K, V> e
;
V oldValue
;
int hash
= hash(key
);
if ((e
= getNode(hash
, key
)) != null
&&
(oldValue
= e
.value
) != null
) {
V v
= remappingFunction
.apply(key
, oldValue
);
if (v
!= null
) {
e
.value
= v
;
afterNodeAccess(e
);
return v
;
} else
removeNode(hash
, key
, null
, false, true);
}
return null
;
}
@Override
public V
compute(K key
,
BiFunction
<? super K
, ? super V
, ? extends V> remappingFunction
) {
if (remappingFunction
== null
)
throw new NullPointerException();
int hash
= hash(key
);
Node
<K, V>[] tab
;
Node
<K, V> first
;
int n
, i
;
int binCount
= 0;
TreeNode
<K, V> t
= null
;
Node
<K, V> old
= null
;
if (size
> threshold
|| (tab
= table
) == null
||
(n
= tab
.length
) == 0)
n
= (tab
= resize()).length
;
if ((first
= tab
[i
= (n
- 1) & hash
]) != null
) {
if (first
instanceof TreeNode)
old
= (t
= (TreeNode
<K, V>) first
).getTreeNode(hash
, key
);
else {
Node
<K, V> e
= first
;
K k
;
do {
if (e
.hash
== hash
&&
((k
= e
.key
) == key
|| (key
!= null
&& key
.equals(k
)))) {
old
= e
;
break;
}
++binCount
;
} while ((e
= e
.next
) != null
);
}
}
V oldValue
= (old
== null
) ? null
: old
.value
;
V v
= remappingFunction
.apply(key
, oldValue
);
if (old
!= null
) {
if (v
!= null
) {
old
.value
= v
;
afterNodeAccess(old
);
} else
removeNode(hash
, key
, null
, false, true);
} else if (v
!= null
) {
if (t
!= null
)
t
.putTreeVal(this, tab
, hash
, key
, v
);
else {
tab
[i
] = newNode(hash
, key
, v
, first
);
if (binCount
>= TREEIFY_THRESHOLD
- 1)
treeifyBin(tab
, hash
);
}
++modCount
;
++size
;
afterNodeInsertion(true);
}
return v
;
}
@Override
public V
merge(K key
, V value
,
BiFunction
<? super V
, ? super V
, ? extends V> remappingFunction
) {
if (value
== null
)
throw new NullPointerException();
if (remappingFunction
== null
)
throw new NullPointerException();
int hash
= hash(key
);
Node
<K, V>[] tab
;
Node
<K, V> first
;
int n
, i
;
int binCount
= 0;
TreeNode
<K, V> t
= null
;
Node
<K, V> old
= null
;
if (size
> threshold
|| (tab
= table
) == null
||
(n
= tab
.length
) == 0)
n
= (tab
= resize()).length
;
if ((first
= tab
[i
= (n
- 1) & hash
]) != null
) {
if (first
instanceof TreeNode)
old
= (t
= (TreeNode
<K, V>) first
).getTreeNode(hash
, key
);
else {
Node
<K, V> e
= first
;
K k
;
do {
if (e
.hash
== hash
&&
((k
= e
.key
) == key
|| (key
!= null
&& key
.equals(k
)))) {
old
= e
;
break;
}
++binCount
;
} while ((e
= e
.next
) != null
);
}
}
if (old
!= null
) {
V v
;
if (old
.value
!= null
)
v
= remappingFunction
.apply(old
.value
, value
);
else
v
= value
;
if (v
!= null
) {
old
.value
= v
;
afterNodeAccess(old
);
} else
removeNode(hash
, key
, null
, false, true);
return v
;
}
if (value
!= null
) {
if (t
!= null
)
t
.putTreeVal(this, tab
, hash
, key
, value
);
else {
tab
[i
] = newNode(hash
, key
, value
, first
);
if (binCount
>= TREEIFY_THRESHOLD
- 1)
treeifyBin(tab
, hash
);
}
++modCount
;
++size
;
afterNodeInsertion(true);
}
return value
;
}
@Override
public void forEach(BiConsumer
<? super K
, ? super V
> action
) {
Node
<K, V>[] tab
;
if (action
== null
)
throw new NullPointerException();
if (size
> 0 && (tab
= table
) != null
) {
int mc
= modCount
;
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
)
action
.accept(e
.key
, e
.value
);
}
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
@Override
public void replaceAll(BiFunction
<? super K
, ? super V
, ? extends V> function
) {
Node
<K, V>[] tab
;
if (function
== null
)
throw new NullPointerException();
if (size
> 0 && (tab
= table
) != null
) {
int mc
= modCount
;
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
) {
e
.value
= function
.apply(e
.key
, e
.value
);
}
}
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
@SuppressWarnings("unchecked")
@Override
public Object
clone() {
MyHashMap
<K, V> result
;
try {
result
= (MyHashMap
<K, V>) super.clone();
} catch (CloneNotSupportedException e
) {
throw new InternalError(e
);
}
result
.reinitialize();
result
.putMapEntries(this, false);
return result
;
}
final float loadFactor() {
return loadFactor
;
}
final int capacity() {
return (table
!= null
) ? table
.length
:
(threshold
> 0) ? threshold
:
DEFAULT_INITIAL_CAPACITY
;
}
private void writeObject(java
.io
.ObjectOutputStream s
)
throws IOException
{
int buckets
= capacity();
s
.defaultWriteObject();
s
.writeInt(buckets
);
s
.writeInt(size
);
internalWriteEntries(s
);
}
private void readObject(java
.io
.ObjectInputStream s
)
throws IOException
, ClassNotFoundException
{
s
.defaultReadObject();
reinitialize();
if (loadFactor
<= 0 || Float
.isNaN(loadFactor
))
throw new InvalidObjectException("Illegal load factor: " +
loadFactor
);
s
.readInt();
int mappings
= s
.readInt();
if (mappings
< 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings
);
else if (mappings
> 0) {
float lf
= Math
.min(Math
.max(0.25f, loadFactor
), 4.0f);
float fc
= (float) mappings
/ lf
+ 1.0f;
int cap
= ((fc
< DEFAULT_INITIAL_CAPACITY
) ?
DEFAULT_INITIAL_CAPACITY
:
(fc
>= MAXIMUM_CAPACITY
) ?
MAXIMUM_CAPACITY
:
tableSizeFor((int) fc
));
float ft
= (float) cap
* lf
;
threshold
= ((cap
< MAXIMUM_CAPACITY
&& ft
< MAXIMUM_CAPACITY
) ?
(int) ft
: Integer
.MAX_VALUE
);
@SuppressWarnings({"rawtypes", "unchecked"})
Node
<K, V>[] tab
= (Node
<K, V>[]) new Node[cap
];
table
= tab
;
for (int i
= 0; i
< mappings
; i
++) {
@SuppressWarnings("unchecked")
K key
= (K
) s
.readObject();
@SuppressWarnings("unchecked")
V value
= (V
) s
.readObject();
putVal(hash(key
), key
, value
, false, false);
}
}
}
abstract class HashIterator {
Node
<K, V> next
;
Node
<K, V> current
;
int expectedModCount
;
int index
;
HashIterator() {
expectedModCount
= modCount
;
Node
<K, V>[] t
= table
;
current
= next
= null
;
index
= 0;
if (t
!= null
&& size
> 0) {
do {
} while (index
< t
.length
&& (next
= t
[index
++]) == null
);
}
}
public final boolean hasNext() {
return next
!= null
;
}
final Node
<K, V> nextNode() {
Node
<K, V>[] t
;
Node
<K, V> e
= next
;
if (modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
if (e
== null
)
throw new NoSuchElementException();
if ((next
= (current
= e
).next
) == null
&& (t
= table
) != null
) {
do {
} while (index
< t
.length
&& (next
= t
[index
++]) == null
);
}
return e
;
}
public final void remove() {
Node
<K, V> p
= current
;
if (p
== null
)
throw new IllegalStateException();
if (modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
current
= null
;
K key
= p
.key
;
removeNode(hash(key
), key
, null
, false, false);
expectedModCount
= modCount
;
}
}
final class KeyIterator extends HashIterator
implements Iterator<K> {
public final K
next() {
return nextNode().key
;
}
}
final class ValueIterator extends HashIterator
implements Iterator<V> {
public final V
next() {
return nextNode().value
;
}
}
final class EntryIterator extends HashIterator
implements Iterator<Entry
<K, V>> {
public final Entry
<K, V> next() {
return nextNode();
}
}
static class HashMapSpliterator<K, V> {
final MyHashMap
<K, V> map
;
Node
<K, V> current
;
int index
;
int fence
;
int est
;
int expectedModCount
;
HashMapSpliterator(MyHashMap
<K, V> m
, int origin
,
int fence
, int est
,
int expectedModCount
) {
this.map
= m
;
this.index
= origin
;
this.fence
= fence
;
this.est
= est
;
this.expectedModCount
= expectedModCount
;
}
final int getFence() {
int hi
;
if ((hi
= fence
) < 0) {
MyHashMap
<K, V> m
= map
;
est
= m
.size
;
expectedModCount
= m
.modCount
;
Node
<K, V>[] tab
= m
.table
;
hi
= fence
= (tab
== null
) ? 0 : tab
.length
;
}
return hi
;
}
public final long estimateSize() {
getFence();
return (long) est
;
}
}
static final class KeySpliterator<K, V>
extends HashMapSpliterator<K, V>
implements Spliterator<K> {
KeySpliterator(MyHashMap
<K, V> m
, int origin
, int fence
, int est
,
int expectedModCount
) {
super(m
, origin
, fence
, est
, expectedModCount
);
}
public KeySpliterator
<K, V> trySplit() {
int hi
= getFence(), lo
= index
, mid
= (lo
+ hi
) >>> 1;
return (lo
>= mid
|| current
!= null
) ? null
:
new KeySpliterator<>(map
, lo
, index
= mid
, est
>>>= 1,
expectedModCount
);
}
public void forEachRemaining(Consumer
<? super K
> action
) {
int i
, hi
, mc
;
if (action
== null
)
throw new NullPointerException();
MyHashMap
<K, V> m
= map
;
Node
<K, V>[] tab
= m
.table
;
if ((hi
= fence
) < 0) {
mc
= expectedModCount
= m
.modCount
;
hi
= fence
= (tab
== null
) ? 0 : tab
.length
;
} else
mc
= expectedModCount
;
if (tab
!= null
&& tab
.length
>= hi
&&
(i
= index
) >= 0 && (i
< (index
= hi
) || current
!= null
)) {
Node
<K, V> p
= current
;
current
= null
;
do {
if (p
== null
)
p
= tab
[i
++];
else {
action
.accept(p
.key
);
p
= p
.next
;
}
} while (p
!= null
|| i
< hi
);
if (m
.modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer
<? super K
> action
) {
int hi
;
if (action
== null
)
throw new NullPointerException();
Node
<K, V>[] tab
= map
.table
;
if (tab
!= null
&& tab
.length
>= (hi
= getFence()) && index
>= 0) {
while (current
!= null
|| index
< hi
) {
if (current
== null
)
current
= tab
[index
++];
else {
K k
= current
.key
;
current
= current
.next
;
action
.accept(k
);
if (map
.modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence
< 0 || est
== map
.size
? Spliterator
.SIZED
: 0) |
Spliterator
.DISTINCT
;
}
}
static final class ValueSpliterator<K, V>
extends HashMapSpliterator<K, V>
implements Spliterator<V> {
ValueSpliterator(MyHashMap
<K, V> m
, int origin
, int fence
, int est
,
int expectedModCount
) {
super(m
, origin
, fence
, est
, expectedModCount
);
}
public ValueSpliterator
<K, V> trySplit() {
int hi
= getFence(), lo
= index
, mid
= (lo
+ hi
) >>> 1;
return (lo
>= mid
|| current
!= null
) ? null
:
new ValueSpliterator<>(map
, lo
, index
= mid
, est
>>>= 1,
expectedModCount
);
}
public void forEachRemaining(Consumer
<? super V
> action
) {
int i
, hi
, mc
;
if (action
== null
)
throw new NullPointerException();
MyHashMap
<K, V> m
= map
;
Node
<K, V>[] tab
= m
.table
;
if ((hi
= fence
) < 0) {
mc
= expectedModCount
= m
.modCount
;
hi
= fence
= (tab
== null
) ? 0 : tab
.length
;
} else
mc
= expectedModCount
;
if (tab
!= null
&& tab
.length
>= hi
&&
(i
= index
) >= 0 && (i
< (index
= hi
) || current
!= null
)) {
Node
<K, V> p
= current
;
current
= null
;
do {
if (p
== null
)
p
= tab
[i
++];
else {
action
.accept(p
.value
);
p
= p
.next
;
}
} while (p
!= null
|| i
< hi
);
if (m
.modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer
<? super V
> action
) {
int hi
;
if (action
== null
)
throw new NullPointerException();
Node
<K, V>[] tab
= map
.table
;
if (tab
!= null
&& tab
.length
>= (hi
= getFence()) && index
>= 0) {
while (current
!= null
|| index
< hi
) {
if (current
== null
)
current
= tab
[index
++];
else {
V v
= current
.value
;
current
= current
.next
;
action
.accept(v
);
if (map
.modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence
< 0 || est
== map
.size
? Spliterator
.SIZED
: 0);
}
}
static final class EntrySpliterator<K, V>
extends HashMapSpliterator<K, V>
implements Spliterator<Entry
<K, V>> {
EntrySpliterator(MyHashMap
<K, V> m
, int origin
, int fence
, int est
,
int expectedModCount
) {
super(m
, origin
, fence
, est
, expectedModCount
);
}
public EntrySpliterator
<K, V> trySplit() {
int hi
= getFence(), lo
= index
, mid
= (lo
+ hi
) >>> 1;
return (lo
>= mid
|| current
!= null
) ? null
:
new EntrySpliterator<>(map
, lo
, index
= mid
, est
>>>= 1,
expectedModCount
);
}
public void forEachRemaining(Consumer
<? super Entry
<K, V>> action
) {
int i
, hi
, mc
;
if (action
== null
)
throw new NullPointerException();
MyHashMap
<K, V> m
= map
;
Node
<K, V>[] tab
= m
.table
;
if ((hi
= fence
) < 0) {
mc
= expectedModCount
= m
.modCount
;
hi
= fence
= (tab
== null
) ? 0 : tab
.length
;
} else
mc
= expectedModCount
;
if (tab
!= null
&& tab
.length
>= hi
&&
(i
= index
) >= 0 && (i
< (index
= hi
) || current
!= null
)) {
Node
<K, V> p
= current
;
current
= null
;
do {
if (p
== null
)
p
= tab
[i
++];
else {
action
.accept(p
);
p
= p
.next
;
}
} while (p
!= null
|| i
< hi
);
if (m
.modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer
<? super Entry
<K, V>> action
) {
int hi
;
if (action
== null
)
throw new NullPointerException();
Node
<K, V>[] tab
= map
.table
;
if (tab
!= null
&& tab
.length
>= (hi
= getFence()) && index
>= 0) {
while (current
!= null
|| index
< hi
) {
if (current
== null
)
current
= tab
[index
++];
else {
Node
<K, V> e
= current
;
current
= current
.next
;
action
.accept(e
);
if (map
.modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics() {
return (fence
< 0 || est
== map
.size
? Spliterator
.SIZED
: 0) |
Spliterator
.DISTINCT
;
}
}
Node
<K, V> newNode(int hash
, K key
, V value
, Node
<K, V> next
) {
return new Node<>(hash
, key
, value
, next
);
}
Node
<K, V> replacementNode(Node
<K, V> p
, Node
<K, V> next
) {
return new Node<>(p
.hash
, p
.key
, p
.value
, next
);
}
TreeNode
<K, V> newTreeNode(int hash
, K key
, V value
, Node
<K, V> next
) {
return new TreeNode<>(hash
, key
, value
, next
);
}
TreeNode
<K, V> replacementTreeNode(Node
<K, V> p
, Node
<K, V> next
) {
return new TreeNode<>(p
.hash
, p
.key
, p
.value
, next
);
}
void reinitialize() {
table
= null
;
entrySet
= null
;
keySet
= null
;
values
= null
;
modCount
= 0;
threshold
= 0;
size
= 0;
}
void afterNodeAccess(MyHashMap
.Node
<K, V> p
) {
}
void afterNodeInsertion(boolean evict
) {
}
void afterNodeRemoval(Node
<K, V> p
) {
}
void internalWriteEntries(java
.io
.ObjectOutputStream s
) throws IOException
{
Node
<K, V>[] tab
;
if (size
> 0 && (tab
= table
) != null
) {
for (int i
= 0; i
< tab
.length
; ++i
) {
for (Node
<K, V> e
= tab
[i
]; e
!= null
; e
= e
.next
) {
s
.writeObject(e
.key
);
s
.writeObject(e
.value
);
}
}
}
}
static final class TreeNode<K, V> extends MyLinkedHashMap.Entry<K, V> {
TreeNode
<K, V> parent
;
TreeNode
<K, V> left
;
TreeNode
<K, V> right
;
TreeNode
<K, V> prev
;
boolean red
;
TreeNode(int hash
, K key
, V val
, Node
<K, V> next
) {
super(hash
, key
, val
, next
);
}
final TreeNode
<K, V> root() {
for (TreeNode
<K, V> r
= this, p
; ; ) {
if ((p
= r
.parent
) == null
)
return r
;
r
= p
;
}
}
static <K, V> void moveRootToFront(Node
<K, V>[] tab
, TreeNode
<K, V> root
) {
int n
;
if (root
!= null
&& tab
!= null
&& (n
= tab
.length
) > 0) {
int index
= (n
- 1) & root
.hash
;
TreeNode
<K, V> first
= (TreeNode
<K, V>) tab
[index
];
if (root
!= first
) {
Node
<K, V> rn
;
tab
[index
] = root
;
TreeNode
<K, V> rp
= root
.prev
;
if ((rn
= root
.next
) != null
)
((TreeNode
<K, V>) rn
).prev
= rp
;
if (rp
!= null
)
rp
.next
= rn
;
if (first
!= null
)
first
.prev
= root
;
root
.next
= first
;
root
.prev
= null
;
}
assert checkInvariants(root
);
}
}
final TreeNode
<K, V> find(int h
, Object k
, Class
<?> kc
) {
TreeNode
<K, V> p
= this;
do {
int ph
, dir
;
K pk
;
TreeNode
<K, V> pl
= p
.left
, pr
= p
.right
, q
;
if ((ph
= p
.hash
) > h
)
p
= pl
;
else if (ph
< h
)
p
= pr
;
else if ((pk
= p
.key
) == k
|| (k
!= null
&& k
.equals(pk
)))
return p
;
else if (pl
== null
)
p
= pr
;
else if (pr
== null
)
p
= pl
;
else if ((kc
!= null
||
(kc
= comparableClassFor(k
)) != null
) &&
(dir
= compareComparables(kc
, k
, pk
)) != 0)
p
= (dir
< 0) ? pl
: pr
;
else if ((q
= pr
.find(h
, k
, kc
)) != null
)
return q
;
else
p
= pl
;
} while (p
!= null
);
return null
;
}
final TreeNode
<K, V> getTreeNode(int h
, Object k
) {
return ((parent
!= null
) ? root() : this).find(h
, k
, null
);
}
static int tieBreakOrder(Object a
, Object b
) {
int d
;
if (a
== null
|| b
== null
||
(d
= a
.getClass().getName().
compareTo(b
.getClass().getName())) == 0)
d
= (System
.identityHashCode(a
) <= System
.identityHashCode(b
) ?
-1 : 1);
return d
;
}
final void treeify(Node
<K, V>[] tab
) {
TreeNode
<K, V> root
= null
;
for (TreeNode
<K, V> x
= this, next
; x
!= null
; x
= next
) {
next
= (TreeNode
<K, V>) x
.next
;
x
.left
= x
.right
= null
;
if (root
== null
) {
x
.parent
= null
;
x
.red
= false;
root
= x
;
} else {
K k
= x
.key
;
int h
= x
.hash
;
Class
<?> kc
= null
;
for (TreeNode
<K, V> p
= root
; ; ) {
int dir
, ph
;
K pk
= p
.key
;
if ((ph
= p
.hash
) > h
)
dir
= -1;
else if (ph
< h
)
dir
= 1;
else if ((kc
== null
&&
(kc
= comparableClassFor(k
)) == null
) ||
(dir
= compareComparables(kc
, k
, pk
)) == 0)
dir
= tieBreakOrder(k
, pk
);
TreeNode
<K, V> xp
= p
;
if ((p
= (dir
<= 0) ? p
.left
: p
.right
) == null
) {
x
.parent
= xp
;
if (dir
<= 0)
xp
.left
= x
;
else
xp
.right
= x
;
root
= balanceInsertion(root
, x
);
break;
}
}
}
}
moveRootToFront(tab
, root
);
}
final Node
<K, V> untreeify(MyHashMap
<K, V> map
) {
Node
<K, V> hd
= null
, tl
= null
;
for (Node
<K, V> q
= this; q
!= null
; q
= q
.next
) {
Node
<K, V> p
= map
.replacementNode(q
, null
);
if (tl
== null
)
hd
= p
;
else
tl
.next
= p
;
tl
= p
;
}
return hd
;
}
final TreeNode
<K, V> putTreeVal(MyHashMap
<K, V> map
, Node
<K, V>[] tab
,
int h
, K k
, V v
) {
Class
<?> kc
= null
;
boolean searched
= false;
TreeNode
<K, V> root
= (parent
!= null
) ? root() : this;
for (TreeNode
<K, V> p
= root
; ; ) {
int dir
, ph
;
K pk
;
if ((ph
= p
.hash
) > h
)
dir
= -1;
else if (ph
< h
)
dir
= 1;
else if ((pk
= p
.key
) == k
|| (k
!= null
&& k
.equals(pk
)))
return p
;
else if ((kc
== null
&&
(kc
= comparableClassFor(k
)) == null
) ||
(dir
= compareComparables(kc
, k
, pk
)) == 0) {
if (!searched
) {
TreeNode
<K, V> q
, ch
;
searched
= true;
if (((ch
= p
.left
) != null
&&
(q
= ch
.find(h
, k
, kc
)) != null
) ||
((ch
= p
.right
) != null
&&
(q
= ch
.find(h
, k
, kc
)) != null
))
return q
;
}
dir
= tieBreakOrder(k
, pk
);
}
TreeNode
<K, V> xp
= p
;
if ((p
= (dir
<= 0) ? p
.left
: p
.right
) == null
) {
Node
<K, V> xpn
= xp
.next
;
TreeNode
<K, V> x
= map
.newTreeNode(h
, k
, v
, xpn
);
if (dir
<= 0)
xp
.left
= x
;
else
xp
.right
= x
;
xp
.next
= x
;
x
.parent
= x
.prev
= xp
;
if (xpn
!= null
)
((TreeNode
<K, V>) xpn
).prev
= x
;
moveRootToFront(tab
, balanceInsertion(root
, x
));
return null
;
}
}
}
final void removeTreeNode(MyHashMap
<K, V> map
, Node
<K, V>[] tab
,
boolean movable
) {
int n
;
if (tab
== null
|| (n
= tab
.length
) == 0)
return;
int index
= (n
- 1) & hash
;
TreeNode
<K, V> first
= (TreeNode
<K, V>) tab
[index
], root
= first
, rl
;
TreeNode
<K, V> succ
= (TreeNode
<K, V>) next
, pred
= prev
;
if (pred
== null
)
tab
[index
] = first
= succ
;
else
pred
.next
= succ
;
if (succ
!= null
)
succ
.prev
= pred
;
if (first
== null
)
return;
if (root
.parent
!= null
)
root
= root
.root();
if (root
== null
|| root
.right
== null
||
(rl
= root
.left
) == null
|| rl
.left
== null
) {
tab
[index
] = first
.untreeify(map
);
return;
}
TreeNode
<K, V> p
= this, pl
= left
, pr
= right
, replacement
;
if (pl
!= null
&& pr
!= null
) {
TreeNode
<K, V> s
= pr
, sl
;
while ((sl
= s
.left
) != null
)
s
= sl
;
boolean c
= s
.red
;
s
.red
= p
.red
;
p
.red
= c
;
TreeNode
<K, V> sr
= s
.right
;
TreeNode
<K, V> pp
= p
.parent
;
if (s
== pr
) {
p
.parent
= s
;
s
.right
= p
;
} else {
TreeNode
<K, V> sp
= s
.parent
;
if ((p
.parent
= sp
) != null
) {
if (s
== sp
.left
)
sp
.left
= p
;
else
sp
.right
= p
;
}
if ((s
.right
= pr
) != null
)
pr
.parent
= s
;
}
p
.left
= null
;
if ((p
.right
= sr
) != null
)
sr
.parent
= p
;
if ((s
.left
= pl
) != null
)
pl
.parent
= s
;
if ((s
.parent
= pp
) == null
)
root
= s
;
else if (p
== pp
.left
)
pp
.left
= s
;
else
pp
.right
= s
;
if (sr
!= null
)
replacement
= sr
;
else
replacement
= p
;
} else if (pl
!= null
)
replacement
= pl
;
else if (pr
!= null
)
replacement
= pr
;
else
replacement
= p
;
if (replacement
!= p
) {
TreeNode
<K, V> pp
= replacement
.parent
= p
.parent
;
if (pp
== null
)
root
= replacement
;
else if (p
== pp
.left
)
pp
.left
= replacement
;
else
pp
.right
= replacement
;
p
.left
= p
.right
= p
.parent
= null
;
}
TreeNode
<K, V> r
= p
.red
? root
: balanceDeletion(root
, replacement
);
if (replacement
== p
) {
TreeNode
<K, V> pp
= p
.parent
;
p
.parent
= null
;
if (pp
!= null
) {
if (p
== pp
.left
)
pp
.left
= null
;
else if (p
== pp
.right
)
pp
.right
= null
;
}
}
if (movable
)
moveRootToFront(tab
, r
);
}
final void split(MyHashMap
<K, V> map
, Node
<K, V>[] tab
, int index
, int bit
) {
TreeNode
<K, V> b
= this;
TreeNode
<K, V> loHead
= null
, loTail
= null
;
TreeNode
<K, V> hiHead
= null
, hiTail
= null
;
int lc
= 0, hc
= 0;
for (TreeNode
<K, V> e
= b
, next
; e
!= null
; e
= next
) {
next
= (TreeNode
<K, V>) e
.next
;
e
.next
= null
;
if ((e
.hash
& bit
) == 0) {
if ((e
.prev
= loTail
) == null
)
loHead
= e
;
else
loTail
.next
= e
;
loTail
= e
;
++lc
;
} else {
if ((e
.prev
= hiTail
) == null
)
hiHead
= e
;
else
hiTail
.next
= e
;
hiTail
= e
;
++hc
;
}
}
if (loHead
!= null
) {
if (lc
<= UNTREEIFY_THRESHOLD
)
tab
[index
] = loHead
.untreeify(map
);
else {
tab
[index
] = loHead
;
if (hiHead
!= null
)
loHead
.treeify(tab
);
}
}
if (hiHead
!= null
) {
if (hc
<= UNTREEIFY_THRESHOLD
)
tab
[index
+ bit
] = hiHead
.untreeify(map
);
else {
tab
[index
+ bit
] = hiHead
;
if (loHead
!= null
)
hiHead
.treeify(tab
);
}
}
}
static <K, V> TreeNode
<K, V> rotateLeft(TreeNode
<K, V> root
,
TreeNode
<K, V> p
) {
TreeNode
<K, V> r
, pp
, rl
;
if (p
!= null
&& (r
= p
.right
) != null
) {
if ((rl
= p
.right
= r
.left
) != null
)
rl
.parent
= p
;
if ((pp
= r
.parent
= p
.parent
) == null
)
(root
= r
).red
= false;
else if (pp
.left
== p
)
pp
.left
= r
;
else
pp
.right
= r
;
r
.left
= p
;
p
.parent
= r
;
}
return root
;
}
static <K, V> TreeNode
<K, V> rotateRight(TreeNode
<K, V> root
,
TreeNode
<K, V> p
) {
TreeNode
<K, V> l
, pp
, lr
;
if (p
!= null
&& (l
= p
.left
) != null
) {
if ((lr
= p
.left
= l
.right
) != null
)
lr
.parent
= p
;
if ((pp
= l
.parent
= p
.parent
) == null
)
(root
= l
).red
= false;
else if (pp
.right
== p
)
pp
.right
= l
;
else
pp
.left
= l
;
l
.right
= p
;
p
.parent
= l
;
}
return root
;
}
static <K, V> TreeNode
<K, V> balanceInsertion(TreeNode
<K, V> root
,
TreeNode
<K, V> x
) {
x
.red
= true;
for (TreeNode
<K, V> xp
, xpp
, xppl
, xppr
; ; ) {
if ((xp
= x
.parent
) == null
) {
x
.red
= false;
return x
;
} else if (!xp
.red
|| (xpp
= xp
.parent
) == null
)
return root
;
if (xp
== (xppl
= xpp
.left
)) {
if ((xppr
= xpp
.right
) != null
&& xppr
.red
) {
xppr
.red
= false;
xp
.red
= false;
xpp
.red
= true;
x
= xpp
;
} else {
if (x
== xp
.right
) {
root
= rotateLeft(root
, x
= xp
);
xpp
= (xp
= x
.parent
) == null
? null
: xp
.parent
;
}
if (xp
!= null
) {
xp
.red
= false;
if (xpp
!= null
) {
xpp
.red
= true;
root
= rotateRight(root
, xpp
);
}
}
}
} else {
if (xppl
!= null
&& xppl
.red
) {
xppl
.red
= false;
xp
.red
= false;
xpp
.red
= true;
x
= xpp
;
} else {
if (x
== xp
.left
) {
root
= rotateRight(root
, x
= xp
);
xpp
= (xp
= x
.parent
) == null
? null
: xp
.parent
;
}
if (xp
!= null
) {
xp
.red
= false;
if (xpp
!= null
) {
xpp
.red
= true;
root
= rotateLeft(root
, xpp
);
}
}
}
}
}
}
static <K, V> TreeNode
<K, V> balanceDeletion(TreeNode
<K, V> root
,
TreeNode
<K, V> x
) {
for (TreeNode
<K, V> xp
, xpl
, xpr
; ; ) {
if (x
== null
|| x
== root
)
return root
;
else if ((xp
= x
.parent
) == null
) {
x
.red
= false;
return x
;
} else if (x
.red
) {
x
.red
= false;
return root
;
} else if ((xpl
= xp
.left
) == x
) {
if ((xpr
= xp
.right
) != null
&& xpr
.red
) {
xpr
.red
= false;
xp
.red
= true;
root
= rotateLeft(root
, xp
);
xpr
= (xp
= x
.parent
) == null
? null
: xp
.right
;
}
if (xpr
== null
)
x
= xp
;
else {
TreeNode
<K, V> sl
= xpr
.left
, sr
= xpr
.right
;
if ((sr
== null
|| !sr
.red
) &&
(sl
== null
|| !sl
.red
)) {
xpr
.red
= true;
x
= xp
;
} else {
if (sr
== null
|| !sr
.red
) {
if (sl
!= null
)
sl
.red
= false;
xpr
.red
= true;
root
= rotateRight(root
, xpr
);
xpr
= (xp
= x
.parent
) == null
?
null
: xp
.right
;
}
if (xpr
!= null
) {
xpr
.red
= (xp
== null
) ? false : xp
.red
;
if ((sr
= xpr
.right
) != null
)
sr
.red
= false;
}
if (xp
!= null
) {
xp
.red
= false;
root
= rotateLeft(root
, xp
);
}
x
= root
;
}
}
} else {
if (xpl
!= null
&& xpl
.red
) {
xpl
.red
= false;
xp
.red
= true;
root
= rotateRight(root
, xp
);
xpl
= (xp
= x
.parent
) == null
? null
: xp
.left
;
}
if (xpl
== null
)
x
= xp
;
else {
TreeNode
<K, V> sl
= xpl
.left
, sr
= xpl
.right
;
if ((sl
== null
|| !sl
.red
) &&
(sr
== null
|| !sr
.red
)) {
xpl
.red
= true;
x
= xp
;
} else {
if (sl
== null
|| !sl
.red
) {
if (sr
!= null
)
sr
.red
= false;
xpl
.red
= true;
root
= rotateLeft(root
, xpl
);
xpl
= (xp
= x
.parent
) == null
?
null
: xp
.left
;
}
if (xpl
!= null
) {
xpl
.red
= (xp
== null
) ? false : xp
.red
;
if ((sl
= xpl
.left
) != null
)
sl
.red
= false;
}
if (xp
!= null
) {
xp
.red
= false;
root
= rotateRight(root
, xp
);
}
x
= root
;
}
}
}
}
}
static <K, V> boolean checkInvariants(TreeNode
<K, V> t
) {
TreeNode
<K, V> tp
= t
.parent
, tl
= t
.left
, tr
= t
.right
,
tb
= t
.prev
, tn
= (TreeNode
<K, V>) t
.next
;
if (tb
!= null
&& tb
.next
!= t
)
return false;
if (tn
!= null
&& tn
.prev
!= t
)
return false;
if (tp
!= null
&& t
!= tp
.left
&& t
!= tp
.right
)
return false;
if (tl
!= null
&& (tl
.parent
!= t
|| tl
.hash
> t
.hash
))
return false;
if (tr
!= null
&& (tr
.parent
!= t
|| tr
.hash
< t
.hash
))
return false;
if (t
.red
&& tl
!= null
&& tl
.red
&& tr
!= null
&& tr
.red
)
return false;
if (tl
!= null
&& !checkInvariants(tl
))
return false;
if (tr
!= null
&& !checkInvariants(tr
))
return false;
return true;
}
}
public int getSize() {
return table
.length
;
}
public void print() {
for (int a
= 0; a
< table
.length
; a
++) {
System
.out
.print("第" + a
+ "个链表:");
print(table
[a
]);
System
.out
.println();
}
}
void print(Node
<K, V> kvNode
) {
if (kvNode
!= null
) {
System
.out
.print(kvNode
.key
+ " ");
print(kvNode
.next
);
}
}
}
MyLinkedHashMap
package com
.map
;
import java
.util
.*
;
import java
.util
.function
.Consumer
;
import java
.util
.function
.BiConsumer
;
import java
.util
.function
.BiFunction
;
import java
.io
.IOException
;
@SuppressWarnings("all")
public class MyLinkedHashMap<K, V>
extends MyHashMap<K, V>
implements Map<K, V> {
static class Entry<K, V> extends MyHashMap.Node<K, V> {
Entry
<K, V> before
, after
;
Entry(int hash
, K key
, V value
, MyHashMap
.Node
<K, V> next
) {
super(hash
, key
, value
, next
);
}
}
private static final long serialVersionUID
= 3801124242820219131L
;
transient MyLinkedHashMap
.Entry
<K, V> head
;
transient MyLinkedHashMap
.Entry
<K, V> tail
;
final boolean accessOrder
;
private void linkNodeLast(MyLinkedHashMap
.Entry
<K, V> p
) {
MyLinkedHashMap
.Entry
<K, V> last
= tail
;
tail
= p
;
if (last
== null
) {
head
= p
;
} else {
p
.before
= last
;
last
.after
= p
;
}
}
private void transferLinks(MyLinkedHashMap
.Entry
<K, V> src
,
MyLinkedHashMap
.Entry
<K, V> dst
) {
MyLinkedHashMap
.Entry
<K, V> b
= dst
.before
= src
.before
;
MyLinkedHashMap
.Entry
<K, V> a
= dst
.after
= src
.after
;
if (b
== null
)
head
= dst
;
else
b
.after
= dst
;
if (a
== null
)
tail
= dst
;
a
.before
= dst
;
}
void reinitialize() {
super.reinitialize();
head
= tail
= null
;
}
MyHashMap
.Node
<K, V> newNode(int hash
, K key
, V value
, MyHashMap
.Node
<K, V> e
) {
MyLinkedHashMap
.Entry
<K, V> p
=
new MyLinkedHashMap.Entry<K, V>(hash
, key
, value
, e
);
linkNodeLast(p
);
return p
;
}
MyHashMap
.Node
<K, V> replacementNode(MyHashMap
.Node
<K, V> p
, MyHashMap
.Node
<K, V> next
) {
MyLinkedHashMap
.Entry
<K, V> q
= (MyLinkedHashMap
.Entry
<K, V>) p
;
MyLinkedHashMap
.Entry
<K, V> t
=
new MyLinkedHashMap.Entry<K, V>(q
.hash
, q
.key
, q
.value
, next
);
transferLinks(q
, t
);
return t
;
}
MyHashMap
.TreeNode
<K, V> newTreeNode(int hash
, K key
, V value
, MyHashMap
.Node
<K, V> next
) {
MyHashMap
.TreeNode
<K, V> p
= new MyHashMap.TreeNode<K, V>(hash
, key
, value
, next
);
linkNodeLast(p
);
return p
;
}
MyHashMap
.TreeNode
<K, V> replacementTreeNode(MyHashMap
.Node
<K, V> p
, MyHashMap
.Node
<K, V> next
) {
MyLinkedHashMap
.Entry
<K, V> q
= (MyLinkedHashMap
.Entry
<K, V>) p
;
MyHashMap
.TreeNode
<K, V> t
= new MyHashMap.TreeNode<K, V>(q
.hash
, q
.key
, q
.value
, next
);
transferLinks(q
, t
);
return t
;
}
void afterNodeRemoval(MyHashMap
.Node
<K, V> e
) {
MyLinkedHashMap
.Entry
<K, V> p
=
(MyLinkedHashMap
.Entry
<K, V>) e
, b
= p
.before
, a
= p
.after
;
p
.before
= p
.after
= null
;
if (b
== null
)
head
= a
;
else
b
.after
= a
;
if (a
== null
)
tail
= b
;
else
a
.before
= b
;
}
void afterNodeInsertion(boolean evict
) {
MyLinkedHashMap
.Entry
<K, V> first
;
if (evict
&& (first
= head
) != null
&& removeEldestEntry(first
)) {
K key
= first
.key
;
removeNode(hash(key
), key
, null
, false, true);
}
}
void afterNodeAccess(MyHashMap
.Node
<K, V> e
) {
MyLinkedHashMap
.Entry
<K, V> last
;
if (accessOrder
&& (last
= tail
) != e
) {
MyLinkedHashMap
.Entry
<K, V> p
=
(MyLinkedHashMap
.Entry
<K, V>) e
, b
= p
.before
, a
= p
.after
;
p
.after
= null
;
if (b
== null
)
head
= a
;
else
b
.after
= a
;
if (a
!= null
)
a
.before
= b
;
else
last
= b
;
if (last
== null
)
head
= p
;
else {
p
.before
= last
;
last
.after
= p
;
}
tail
= p
;
++modCount
;
}
}
void internalWriteEntries(java
.io
.ObjectOutputStream s
) throws IOException
{
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
) {
s
.writeObject(e
.key
);
s
.writeObject(e
.value
);
}
}
public MyLinkedHashMap(int initialCapacity
, float loadFactor
) {
super(initialCapacity
, loadFactor
);
accessOrder
= false;
}
public MyLinkedHashMap(int initialCapacity
) {
super(initialCapacity
);
accessOrder
= false;
}
public MyLinkedHashMap() {
super();
accessOrder
= false;
}
public MyLinkedHashMap(Map
<? extends K, ? extends V> m
) {
super();
accessOrder
= false;
putMapEntries(m
, false);
}
public MyLinkedHashMap(int initialCapacity
,
float loadFactor
,
boolean accessOrder
) {
super(initialCapacity
, loadFactor
);
this.accessOrder
= accessOrder
;
}
public boolean containsValue(Object value
) {
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
) {
V v
= e
.value
;
if (v
== value
|| (value
!= null
&& value
.equals(v
)))
return true;
}
return false;
}
public V
get(Object key
) {
MyHashMap
.Node
<K, V> e
;
if ((e
= getNode(hash(key
), key
)) == null
)
return null
;
if (accessOrder
)
afterNodeAccess(e
);
return e
.value
;
}
public V
getOrDefault(Object key
, V defaultValue
) {
MyHashMap
.Node
<K, V> e
;
if ((e
= getNode(hash(key
), key
)) == null
)
return defaultValue
;
if (accessOrder
)
afterNodeAccess(e
);
return e
.value
;
}
public void clear() {
super.clear();
head
= tail
= null
;
}
protected boolean removeEldestEntry(Map
.Entry
<K, V> eldest
) {
return false;
}
public Set
<K> keySet() {
Set
<K> ks
= keySet
;
if (ks
== null
) {
ks
= new LinkedKeySet();
keySet
= ks
;
}
return ks
;
}
final class LinkedKeySet extends AbstractSet<K> {
public final int size() {
return size
;
}
public final void clear() {
MyLinkedHashMap
.this.clear();
}
public final Iterator
<K> iterator() {
return new LinkedKeyIterator();
}
public final boolean contains(Object o
) {
return containsKey(o
);
}
public final boolean remove(Object key
) {
return removeNode(hash(key
), key
, null
, false, true) != null
;
}
public final Spliterator
<K> spliterator() {
return Spliterators
.spliterator(this, Spliterator
.SIZED
|
Spliterator
.ORDERED
|
Spliterator
.DISTINCT
);
}
public final void forEach(Consumer
<? super K
> action
) {
if (action
== null
)
throw new NullPointerException();
int mc
= modCount
;
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
)
action
.accept(e
.key
);
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public Collection
<V> values() {
Collection
<V> vs
= values
;
if (vs
== null
) {
vs
= new LinkedValues();
values
= vs
;
}
return vs
;
}
final class LinkedValues extends AbstractCollection<V> {
public final int size() {
return size
;
}
public final void clear() {
MyLinkedHashMap
.this.clear();
}
public final Iterator
<V> iterator() {
return new LinkedValueIterator();
}
public final boolean contains(Object o
) {
return containsValue(o
);
}
public final Spliterator
<V> spliterator() {
return Spliterators
.spliterator(this, Spliterator
.SIZED
|
Spliterator
.ORDERED
);
}
public final void forEach(Consumer
<? super V
> action
) {
if (action
== null
)
throw new NullPointerException();
int mc
= modCount
;
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
)
action
.accept(e
.value
);
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public Set
<Map
.Entry
<K, V>> entrySet() {
Set
<Map
.Entry
<K, V>> es
;
return (es
= entrySet
) == null
? (entrySet
= new LinkedEntrySet()) : es
;
}
final class LinkedEntrySet extends AbstractSet<Map
.Entry
<K, V>> {
public final int size() {
return size
;
}
public final void clear() {
MyLinkedHashMap
.this.clear();
}
public final Iterator
<Map
.Entry
<K, V>> iterator() {
return new LinkedEntryIterator();
}
public final boolean contains(Object o
) {
if (!(o
instanceof Map.Entry))
return false;
Map
.Entry
<?, ?> e
= (Map
.Entry
<?, ?>) o
;
Object key
= e
.getKey();
MyHashMap
.Node
<K, V> candidate
= getNode(hash(key
), key
);
return candidate
!= null
&& candidate
.equals(e
);
}
public final boolean remove(Object o
) {
if (o
instanceof Map.Entry) {
Map
.Entry
<?, ?> e
= (Map
.Entry
<?, ?>) o
;
Object key
= e
.getKey();
Object value
= e
.getValue();
return removeNode(hash(key
), key
, value
, true, true) != null
;
}
return false;
}
public final Spliterator
<Map
.Entry
<K, V>> spliterator() {
return Spliterators
.spliterator(this, Spliterator
.SIZED
|
Spliterator
.ORDERED
|
Spliterator
.DISTINCT
);
}
public final void forEach(Consumer
<? super Map
.Entry
<K, V>> action
) {
if (action
== null
)
throw new NullPointerException();
int mc
= modCount
;
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
)
action
.accept(e
);
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
}
public void forEach(BiConsumer
<? super K
, ? super V
> action
) {
if (action
== null
)
throw new NullPointerException();
int mc
= modCount
;
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
)
action
.accept(e
.key
, e
.value
);
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
public void replaceAll(BiFunction
<? super K
, ? super V
, ? extends V> function
) {
if (function
== null
)
throw new NullPointerException();
int mc
= modCount
;
for (MyLinkedHashMap
.Entry
<K, V> e
= head
; e
!= null
; e
= e
.after
)
e
.value
= function
.apply(e
.key
, e
.value
);
if (modCount
!= mc
)
throw new ConcurrentModificationException();
}
abstract class LinkedHashIterator {
MyLinkedHashMap
.Entry
<K, V> next
;
MyLinkedHashMap
.Entry
<K, V> current
;
int expectedModCount
;
LinkedHashIterator() {
next
= head
;
expectedModCount
= modCount
;
current
= null
;
}
public final boolean hasNext() {
return next
!= null
;
}
final MyLinkedHashMap
.Entry
<K, V> nextNode() {
MyLinkedHashMap
.Entry
<K, V> e
= next
;
if (modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
if (e
== null
)
throw new NoSuchElementException();
current
= e
;
next
= e
.after
;
return e
;
}
public final void remove() {
MyHashMap
.Node
<K, V> p
= current
;
if (p
== null
)
throw new IllegalStateException();
if (modCount
!= expectedModCount
)
throw new ConcurrentModificationException();
current
= null
;
K key
= p
.key
;
removeNode(hash(key
), key
, null
, false, false);
expectedModCount
= modCount
;
}
}
final class LinkedKeyIterator extends LinkedHashIterator
implements Iterator<K> {
public final K
next() {
return nextNode().getKey();
}
}
final class LinkedValueIterator extends LinkedHashIterator
implements Iterator<V> {
public final V
next() {
return nextNode().value
;
}
}
final class LinkedEntryIterator extends LinkedHashIterator
implements Iterator<Map
.Entry
<K, V>> {
public final Map
.Entry
<K, V> next() {
return nextNode();
}
}
public void test() {
Node node
= null
;
for (int a
= 0; a
< 10; a
++) {
Node node1
= new Node(a
, a
, a
, node
);
node
= node1
;
}
while (node
.next
!= null
) {
System
.out
.println(node
.key
);
node
= node
.next
;
}
afterNodeAccess(node
);
while (node
.next
!= null
) {
System
.out
.println(node
.key
);
node
= node
.next
;
}
}
}
Because Java class loaders follow the parent-delegation model, if you write another class called HashMap the original one is loaded first; a same-named class in the same package will not be loaded and will cause errors. You could place it in a different package, but with an identical name and static methods involved, it is easy to import the wrong one by accident. To avoid those problems and keep things clearly separated, writing a map class of your own is the safer route, and since HashMap and LinkedHashMap call into each other, both have to be copied.
Added methods:
Used to check the current array length of the map and to print the contents of every linked list in it.
// returns the current array (bucket) length of the map
public int getSize() {
    return table.length;
}

// prints the contents of every linked list in the map
public void print() {
    for (int a = 0; a < table.length; a++) {
        System.out.print("第" + a + "个链表:");
        print(table[a]);
        System.out.println();
    }
}

void print(Node<K, V> kvNode) {
    if (kvNode != null) {
        System.out.print(kvNode.key + " ");
        print(kvNode.next);
    }
}
Methods explored:
1. The put method
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K, V>[] tab;
    Node<K, V> p;
    int n, i;
    // lazily create the table on the first put
    if ((tab = table) == null || (n = tab.length) == 0) {
        n = (tab = resize()).length;
    }
    // bucket index: (length - 1) & hash
    int i1 = n - 1;
    i = i1 & hash;
    if ((p = tab[i]) == null) {
        // empty bucket: the new node becomes the head
        tab[i] = newNode(hash, key, value, null);
    } else {
        MyHashMap.Node<K, V> e;
        K k;
        if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
            // the head node already holds this key
            e = p;
        else if (p instanceof MyHashMap.TreeNode)
            // the bucket has been treeified: insert into the red-black tree
            e = ((MyHashMap.TreeNode<K, V>) p).putTreeVal(this, tab, hash, key, value);
        else {
            // walk the linked list
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    // key not found: append the new node at the tail
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) {
            // key already existed: replace the value and return the old one
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    // resizing is driven by the key count (size), not by how many buckets are occupied
    if (++size > threshold) {
        resize();
        System.out.println(key + "扩容");
    }
    afterNodeInsertion(evict);
    return null;
}
When a key/value pair is stored, this method first decides which bucket it belongs to, then checks whether the key already exists in that bucket's list and handles replacement, appending and resizing accordingly; the key steps are annotated in the comments above.
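A side note on the index computation above: i = (n - 1) & hash only works as a bucket selector because the table length n is always a power of two. The standalone sketch below (a hypothetical IndexDemo class, not part of MyHashMap) illustrates that for such lengths the bit mask picks the same slot as a modulo would:

public class IndexDemo {
    public static void main(String[] args) {
        int n = 16;                              // table length, always a power of two
        int[] hashes = {97, 3903, -42, 1 << 20};
        for (int h : hashes) {
            int byMask = (n - 1) & h;            // what putVal/getNode compute
            int byMod = Math.floorMod(h, n);     // floorMod handles negative hashes
            System.out.println(h + " -> mask=" + byMask + ", mod=" + byMod);
        }
    }
}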
2. The resize method
final Node<K, V>[] resize() {
    Node<K, V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        } else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1;
    } else if (oldThr > 0)
        newCap = oldThr;
    else {
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float) newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ?
                (int) ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes", "unchecked"})
    Node<K, V>[] newTab = (Node<K, V>[]) new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K, V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K, V>) e).split(this, newTab, j, oldCap);
                else {
                    Node<K, V> loHead = null, loTail = null;
                    Node<K, V> hiHead = null, hiTail = null;
                    Node<K, V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        } else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
The principle here is fairly simple: allocate a new array, recompute each entry's bucket index from its cached hash against the new length, write the entries into the new array, and finally point the table reference at it.
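One detail worth pointing out: because the capacity always doubles, an entry's new index never has to be recomputed from scratch — the single bit tested by (e.hash & oldCap) decides whether it stays at index j or moves to j + oldCap, which is exactly what the lo/hi lists in resize implement. A minimal standalone sketch of that rule (a hypothetical SplitDemo class, not from the article's code):

public class SplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;            // capacity doubles on resize
        int[] hashes = {5, 21, 97, 113};
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            boolean stays = (h & oldCap) == 0;   // the bit resize() tests
            System.out.println("hash=" + h + " old=" + oldIndex
                    + " new=" + newIndex + " stays at old index: " + stays);
        }
    }
}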
3. The get method
final MyHashMap.Node<K, V> getNode(int hash, Object key) {
    MyHashMap.Node<K, V>[] tab;
    MyHashMap.Node<K, V> first, e;
    int n;
    K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash &&
                ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K, V>) first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
The get method is also straightforward: compute the bucket index from the key's hash, walk the corresponding list (or tree) comparing hash and equals, and return the matching node's value.
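This is also why the tests below use the keys "a" and 97: both report a hashCode of 97, so they fall into the same bucket, and getNode then has to rely on equals to tell them apart. A small illustration against the standard java.util.HashMap (a hypothetical CollisionDemo class, just for demonstration):

public class CollisionDemo {
    public static void main(String[] args) {
        System.out.println("a".hashCode());                 // 97
        System.out.println(Integer.valueOf(97).hashCode()); // 97
        java.util.HashMap<Object, String> m = new java.util.HashMap<>();
        m.put("a", "string key");
        m.put(97, "integer key");
        System.out.println(m.get("a"));   // string key
        System.out.println(m.get(97));    // integer key
    }
}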
4. The treeify (red-black tree) method
final void treeifyBin(Node<K, V>[] tab, int hash) {}
Treeification only triggers once a bucket's list reaches length 8 (with a table of at least 64 slots); shrinking the threshold by hand would not show the real behaviour, and collecting 8 keys that all land in the same bucket is awkward, so there is no data-driven demo of it here. The conversion is also not something unique to HashMap — it is essentially the standard red-black tree algorithm — so refer to red-black tree data-structure material for the details.
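If you do want to watch treeification happen with real data, one option (not used in this article) is a key class whose hashCode is constant, so that every instance lands in the same bucket. The sketch below is a hypothetical example against the standard java.util.HashMap; with an initial capacity of 64 the table is large enough that, once the list reaches the threshold of 8, the bin should be treeified rather than resized (JDK 1.8 behaviour):

import java.util.HashMap;

public class CollidingKey {
    private final int id;

    CollidingKey(int id) { this.id = id; }

    @Override
    public int hashCode() { return 1; }   // constant: every key collides

    @Override
    public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).id == id;
    }

    public static void main(String[] args) {
        HashMap<CollidingKey, Integer> m = new HashMap<>(64);
        for (int i = 0; i < 10; i++) {
            m.put(new CollidingKey(i), i); // the 9th put should trigger treeifyBin
        }
        System.out.println(m.size());      // 10
    }
}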
Verifying the results
1. When the number of buckets is less than 64
public static void main(String[] args) {
    MyHashMap map = new MyHashMap(4);
    map.put("a1", "a1");
    map.put("a2", "a2");
    map.put("a", "a1");
    map.put(97, "a2");
    map.put("a", "a3");
    map.put(97, "a2");
    map.put("a", "a3");
    System.out.println("数组长度:" + map.getSize());
    System.out.println("key的数量:" + map.size());
    System.out.println("map的链表展示:");
    map.print();
}
Run output
数组长度:8
key的数量:4
map的链表展示:
第0个链表:a1
第1个链表:a2 a 97
第2个链表:
第3个链表:
第4个链表:
第5个链表:
第6个链表:
第7个链表:
The initial capacity was set to 4. With four keys stored, only two of the lists are occupied, which is below 4 × 0.75 = 3, yet the table was still resized (its length is 8), because the key count of 4 exceeded the threshold of 3.
2. When the number of buckets is greater than 64
public static void main(String[] args) {
    MyHashMap map = new MyHashMap(128);
    for (int a = 0; a < 94; a++) {
        map.put(a, a);
    }
    map.put("a97", "a2");
    map.put("a", "a1");
    map.put(97, "a2");
    System.out.println("map的链表展示:");
    map.print();
    System.out.println("数组长度:" + map.getSize());
    System.out.println("key的数量:" + map.size());
}
map的链表展示:
第0个链表:0
第1个链表:1
第2个链表:2
第3个链表:3
第4个链表:4
第5个链表:5
第6个链表:6
第7个链表:7
第8个链表:8
第9个链表:9
第10个链表:10
第11个链表:11
第12个链表:12
第13个链表:13
第14个链表:14
第15个链表:15
第16个链表:16
第17个链表:17
第18个链表:18
第19个链表:19
第20个链表:20
第21个链表:21
第22个链表:22
第23个链表:23
第24个链表:24
第25个链表:25
第26个链表:26
第27个链表:27
第28个链表:28
第29个链表:29
第30个链表:30
第31个链表:31
第32个链表:32
第33个链表:33
第34个链表:34
第35个链表:35
第36个链表:36
第37个链表:37
第38个链表:38
第39个链表:39
第40个链表:40
第41个链表:41
第42个链表:42
第43个链表:43
第44个链表:44
第45个链表:45
第46个链表:46
第47个链表:47
第48个链表:48
第49个链表:49
第50个链表:50
第51个链表:51
第52个链表:52
第53个链表:53
第54个链表:54
第55个链表:55
第56个链表:56
第57个链表:57
第58个链表:58
第59个链表:59
第60个链表:60
第61个链表:61
第62个链表:62 a97
第63个链表:63
第64个链表:64
第65个链表:65
第66个链表:66
第67个链表:67
第68个链表:68
第69个链表:69
第70个链表:70
第71个链表:71
第72个链表:72
第73个链表:73
第74个链表:74
第75个链表:75
第76个链表:76
第77个链表:77
第78个链表:78
第79个链表:79
第80个链表:80
第81个链表:81
第82个链表:82
第83个链表:83
第84个链表:84
第85个链表:85
第86个链表:86
第87个链表:87
第88个链表:88
第89个链表:89
第90个链表:90
第91个链表:91
第92个链表:92
第93个链表:93
第94个链表:
第95个链表:
第96个链表:
第97个链表:a 97
第98个链表:
第99个链表:
第100个链表:
第101个链表:
第102个链表:
第103个链表:
第104个链表:
第105个链表:
第106个链表:
第107个链表:
第108个链表:
第109个链表:
第110个链表:
第111个链表:
第112个链表:
第113个链表:
第114个链表:
第115个链表:
第116个链表:
第117个链表:
第118个链表:
第119个链表:
第120个链表:
第121个链表:
第122个链表:
第123个链表:
第124个链表:
第125个链表:
第126个链表:
第127个链表:
第128个链表:
第129个链表:
第130个链表:
第131个链表:
第132个链表:
第133个链表:
第134个链表:
第135个链表:
第136个链表:
第137个链表:
第138个链表:
第139个链表:
第140个链表:
第141个链表:
第142个链表:
第143个链表:
第144个链表:
第145个链表:
第146个链表:
第147个链表:
第148个链表:
第149个链表:
第150个链表:
第151个链表:
第152个链表:
第153个链表:
第154个链表:
第155个链表:
第156个链表:
第157个链表:
第158个链表:
第159个链表:
第160个链表:
第161个链表:
第162个链表:
第163个链表:
第164个链表:
第165个链表:
第166个链表:
第167个链表:
第168个链表:
第169个链表:
第170个链表:
第171个链表:
第172个链表:
第173个链表:
第174个链表:
第175个链表:
第176个链表:
第177个链表:
第178个链表:
第179个链表:
第180个链表:
第181个链表:
第182个链表:
第183个链表:
第184个链表:
第185个链表:
第186个链表:
第187个链表:
第188个链表:
第189个链表:
第190个链表:
第191个链表:
第192个链表:
第193个链表:
第194个链表:
第195个链表:
第196个链表:
第197个链表:
第198个链表:
第199个链表:
第200个链表:
第201个链表:
第202个链表:
第203个链表:
第204个链表:
第205个链表:
第206个链表:
第207个链表:
第208个链表:
第209个链表:
第210个链表:
第211个链表:
第212个链表:
第213个链表:
第214个链表:
第215个链表:
第216个链表:
第217个链表:
第218个链表:
第219个链表:
第220个链表:
第221个链表:
第222个链表:
第223个链表:
第224个链表:
第225个链表:
第226个链表:
第227个链表:
第228个链表:
第229个链表:
第230个链表:
第231个链表:
第232个链表:
第233个链表:
第234个链表:
第235个链表:
第236个链表:
第237个链表:
第238个链表:
第239个链表:
第240个链表:
第241个链表:
第242个链表:
第243个链表:
第244个链表:
第245个链表:
第246个链表:
第247个链表:
第248个链表:
第249个链表:
第250个链表:
第251个链表:
第252个链表:
第253个链表:
第254个链表:
第255个链表:
数组长度:256
key的数量:97
From the output: the initial capacity here is 128 and 97 keys were inserted in total. The lists at index 62 and index 97 each hold two elements, so only 95 buckets are occupied, which is below 128 × 0.75 = 96 — yet the table was still resized (to 256), because the key count of 97 exceeded the threshold of 96.
Conclusions:
Findings:
The test code and final results show that whether the bucket count is below 64 or above it, resizing is driven by the number of keys, not by the number of occupied buckets.
1. The array length of a HashMap is decided by the number of keys, not by how many slots are already occupied, so the array can end up much longer than the number of keys, which feels a little wasteful. In that situation the map behaves almost like a plain array whose index is fixed by the hash rather than chosen by you; the upside is that removals never shift other entries and access stays fast, the downside is that a stretched-out array wastes space — although once the key count is large enough, the bigger table also keeps hash collisions down, so there are trade-offs both ways. (A reflection-based cross-check against the real java.util.HashMap is sketched after this list.)
2. With fewer than 64 buckets, the insertion order into the second list above was a2, a, 97, a, 97, a, and the first element of that list remained the first one inserted, a2, throughout — so new elements are appended one after another at the tail. JDK 1.7 reportedly placed them at the head; I no longer have a 1.7 environment, so that was not verified.
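As mentioned in point 1, the same behaviour can be cross-checked against the real java.util.HashMap without copying any source, by reading its internal table field through reflection. This is only a hedged sketch: the field name is a JDK-internal, unsupported detail, and on Java 9+ the reflective access may require --add-opens java.base/java.util=ALL-UNNAMED:

import java.lang.reflect.Field;
import java.util.HashMap;

public class TableLengthCheck {
    public static void main(String[] args) throws Exception {
        HashMap<Object, Object> map = new HashMap<>(4);
        map.put("a1", "a1");
        map.put("a2", "a2");
        map.put("a", "a1");
        map.put(97, "a2");
        Field f = HashMap.class.getDeclaredField("table");  // internal bucket array
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        System.out.println("table length = " + table.length + ", size = " + map.size());
        // expected on JDK 8: table length = 8, size = 4 — matching the first test above
    }
}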
Note:
These are my own test results; if anything is inaccurate, corrections are welcome, and you are encouraged to run the tests yourself and join the discussion.
Finally, I suggest copying the code into your own IDE and running it: wherever you want to explore, add a log there. Compared with merely reading the source, hands-on runs are much easier to understand and remember, and compared with debug mode you get a printed history you can look back over, so I recommend working this way.