From 77fc7b452c84c50b89731da2ae88335fa2e336d4 Mon Sep 17 00:00:00 2001
From: Tad
Date: Tue, 31 Oct 2017 13:24:35 -0400
Subject: [PATCH] Fix empty CVE patches

---
 Patches/Linux_CVEs/CVE-2014-8709/ANY/0.patch  |  53 +
 Patches/Linux_CVEs/CVE-2014-9914/ANY/0.patch  | 178 ++++
 Patches/Linux_CVEs/CVE-2015-3288/ANY/0.patch  |  76 ++
 Patches/Linux_CVEs/CVE-2015-4170/ANY/0.patch  | 121 +++
 Patches/Linux_CVEs/CVE-2015-5706/ANY/0.patch  |  39 +
 Patches/Linux_CVEs/CVE-2015-8964/ANY/0.patch  |  78 ++
 Patches/Linux_CVEs/CVE-2016-0806/3.10/0.patch |   5 +
 Patches/Linux_CVEs/CVE-2016-0806/3.4/1.patch  |   5 +
 Patches/Linux_CVEs/CVE-2016-10088/ANY/0.patch |  48 +
 Patches/Linux_CVEs/CVE-2016-1583/ANY/0.patch  |  57 ++
 Patches/Linux_CVEs/CVE-2016-3137/ANY/0.patch  |  49 +
 Patches/Linux_CVEs/CVE-2016-4794/ANY/0.patch  | 107 ++
 Patches/Linux_CVEs/CVE-2016-4794/ANY/1.patch  | 156 +++
 Patches/Linux_CVEs/CVE-2016-7916/ANY/0.patch  |  56 ++
 Patches/Linux_CVEs/CVE-2016-8650/ANY/0.patch  | 100 ++
 Patches/Linux_CVEs/CVE-2016-9191/ANY/0.patch  |  87 ++
 Patches/Linux_CVEs/CVE-2016-9754/ANY/0.patch  |  89 ++
 Patches/Linux_CVEs/CVE-2016-9793/ANY/0.patch  |  49 +
 Patches/Linux_CVEs/CVE-2016-9794/ANY/0.patch  |  44 +
 Patches/Linux_CVEs/CVE-2016-9806/ANY/0.patch  |  50 +
 Patches/Linux_CVEs/CVE-2017-5967/ANY/0.patch  | 939 ++++++++++++++++++
 Patches/Linux_CVEs/CVE-2017-5986/ANY/0.patch  |  39 +
 Patches/Linux_CVEs/CVE-2017-6001/3.4/0.patch  | 159 +++
 Patches/Linux_CVEs/CVE-2017-6345/ANY/0.patch  |  58 ++
 Patches/Linux_CVEs/CVE-2017-6346/ANY/0.patch  | 126 +++
 Patches/Linux_CVEs/CVE-2017-7187/ANY/0.patch  |   2 +-
 Patches/Linux_CVEs/CVE-2017-9075/ANY/0.patch  |  33 +
 Patches/Linux_CVEs/CVE-2017-9076/ANY/0.patch  |  64 ++
 Patches/Linux_CVEs/Fix.sh                     |  14 +
 Scripts/LineageOS-14.1/00init.sh              |   1 +
 Scripts/LineageOS-14.1/Optimize.sh            |   2 +-
 Scripts/LineageOS-14.1/Patch.sh               |   4 +-
 Scripts/LineageOS-14.1/Patch_CVE.sh           |   2 +-
 Scripts/LineageOS-14.1/Rebrand.sh             |   2 +-
 Scripts/LineageOS-14.1/Theme.sh               |   2 +-
 35 files changed, 2887 insertions(+), 7 deletions(-)
 create mode 100644 Patches/Linux_CVEs/Fix.sh

diff --git a/Patches/Linux_CVEs/CVE-2014-8709/ANY/0.patch b/Patches/Linux_CVEs/CVE-2014-8709/ANY/0.patch
index e69de29b..25f6dfb3 100644
--- a/Patches/Linux_CVEs/CVE-2014-8709/ANY/0.patch
+++ b/Patches/Linux_CVEs/CVE-2014-8709/ANY/0.patch
@@ -0,0 +1,53 @@
+From 338f977f4eb441e69bb9a46eaa0ac715c931a67f Mon Sep 17 00:00:00 2001
+From: Johannes Berg
+Date: Sat, 1 Feb 2014 00:16:23 +0100
+Subject: mac80211: fix fragmentation code, particularly for encryption
+
+The "new" fragmentation code (since my rewrite almost 5 years ago)
+erroneously sets skb->len rather than using skb_trim() to adjust
+the length of the first fragment after copying out all the others.
+This leaves the skb tail pointer pointing to after where the data
+originally ended, and thus causes the encryption MIC to be written
+at that point, rather than where it belongs: immediately after the
+data.
+
+The impact of this is that if software encryption is done, then
+ a) encryption doesn't work for the first fragment, the connection
+    becomes unusable as the first fragment will never be properly
+    verified at the receiver, the MIC is practically guaranteed to
+    be wrong
+ b) we leak up to 8 bytes of plaintext (!) of the packet out into
+    the air
+
+This is only mitigated by the fact that many devices are capable
+of doing encryption in hardware, in which case this can't happen
+as the tail pointer is irrelevant in that case. Additionally,
+fragmentation is not used very frequently and would normally have
+to be configured manually.
+
+Fix this by using skb_trim() properly.
+
+Cc: stable@vger.kernel.org
+Fixes: 2de8e0d999b8 ("mac80211: rewrite fragmentation")
+Reported-by: Jouni Malinen
+Signed-off-by: Johannes Berg
+---
+ net/mac80211/tx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 27c990b..97a02d3 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -878,7 +878,7 @@ static int ieee80211_fragment(struct ieee80211_tx_data *tx,
+ 	}
+
+ 	/* adjust first fragment's length */
+-	skb->len = hdrlen + per_fragm;
++	skb_trim(skb, hdrlen + per_fragm);
+ 	return 0;
+ }
+
+--
+cgit v1.1
+
diff --git a/Patches/Linux_CVEs/CVE-2014-9914/ANY/0.patch b/Patches/Linux_CVEs/CVE-2014-9914/ANY/0.patch
index e69de29b..a630feea 100644
--- a/Patches/Linux_CVEs/CVE-2014-9914/ANY/0.patch
+++ b/Patches/Linux_CVEs/CVE-2014-9914/ANY/0.patch
@@ -0,0 +1,178 @@
+From 9709674e68646cee5a24e3000b3558d25412203a Mon Sep 17 00:00:00 2001
+From: Eric Dumazet
+Date: Tue, 10 Jun 2014 06:43:01 -0700
+Subject: ipv4: fix a race in ip4_datagram_release_cb()
+
+Alexey gave a AddressSanitizer[1] report that finally gave a good hint
+at where was the origin of various problems already reported by Dormando
+in the past [2]
+
+Problem comes from the fact that UDP can have a lockless TX path, and
+concurrent threads can manipulate sk_dst_cache, while another thread,
+is holding socket lock and calls __sk_dst_set() in
+ip4_datagram_release_cb() (this was added in linux-3.8)
+
+It seems that all we need to do is to use sk_dst_check() and
+sk_dst_set() so that all the writers hold same spinlock
+(sk->sk_dst_lock) to prevent corruptions.
+
+TCP stack do not need this protection, as all sk_dst_cache writers hold
+the socket lock.
+ +[1] +https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel + +AddressSanitizer: heap-use-after-free in ipv4_dst_check +Read of size 2 by thread T15453: + [] ipv4_dst_check+0x1a/0x90 ./net/ipv4/route.c:1116 + [] __sk_dst_check+0x89/0xe0 ./net/core/sock.c:531 + [] ip4_datagram_release_cb+0x46/0x390 ??:0 + [] release_sock+0x17a/0x230 ./net/core/sock.c:2413 + [] ip4_datagram_connect+0x462/0x5d0 ??:0 + [] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534 + [] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701 + [] SyS_connect+0xe/0x10 ./net/socket.c:1682 + [] system_call_fastpath+0x16/0x1b +./arch/x86/kernel/entry_64.S:629 + +Freed by thread T15455: + [] dst_destroy+0xa8/0x160 ./net/core/dst.c:251 + [] dst_release+0x45/0x80 ./net/core/dst.c:280 + [] ip4_datagram_connect+0xa1/0x5d0 ??:0 + [] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534 + [] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701 + [] SyS_connect+0xe/0x10 ./net/socket.c:1682 + [] system_call_fastpath+0x16/0x1b +./arch/x86/kernel/entry_64.S:629 + +Allocated by thread T15453: + [] dst_alloc+0x81/0x2b0 ./net/core/dst.c:171 + [] rt_dst_alloc+0x47/0x50 ./net/ipv4/route.c:1406 + [< inlined >] __ip_route_output_key+0x3e8/0xf70 +__mkroute_output ./net/ipv4/route.c:1939 + [] __ip_route_output_key+0x3e8/0xf70 ./net/ipv4/route.c:2161 + [] ip_route_output_flow+0x14/0x30 ./net/ipv4/route.c:2249 + [] ip4_datagram_connect+0x317/0x5d0 ??:0 + [] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534 + [] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701 + [] SyS_connect+0xe/0x10 ./net/socket.c:1682 + [] system_call_fastpath+0x16/0x1b +./arch/x86/kernel/entry_64.S:629 + +[2] +<4>[196727.311203] general protection fault: 0000 [#1] SMP +<4>[196727.311224] Modules linked in: xt_TEE xt_dscp xt_DSCP macvlan bridge coretemp crc32_pclmul ghash_clmulni_intel gpio_ich microcode ipmi_watchdog ipmi_devintf sb_edac edac_core lpc_ich mfd_core tpm_tis tpm tpm_bios ipmi_si ipmi_msghandler isci igb libsas i2c_algo_bit ixgbe 
ptp pps_core mdio +<4>[196727.311333] CPU: 17 PID: 0 Comm: swapper/17 Not tainted 3.10.26 #1 +<4>[196727.311344] Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.0 07/05/2013 +<4>[196727.311364] task: ffff885e6f069700 ti: ffff885e6f072000 task.ti: ffff885e6f072000 +<4>[196727.311377] RIP: 0010:[] [] ipv4_dst_destroy+0x4f/0x80 +<4>[196727.311399] RSP: 0018:ffff885effd23a70 EFLAGS: 00010282 +<4>[196727.311409] RAX: dead000000200200 RBX: ffff8854c398ecc0 RCX: 0000000000000040 +<4>[196727.311423] RDX: dead000000100100 RSI: dead000000100100 RDI: dead000000200200 +<4>[196727.311437] RBP: ffff885effd23a80 R08: ffffffff815fd9e0 R09: ffff885d5a590800 +<4>[196727.311451] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 +<4>[196727.311464] R13: ffffffff81c8c280 R14: 0000000000000000 R15: ffff880e85ee16ce +<4>[196727.311510] FS: 0000000000000000(0000) GS:ffff885effd20000(0000) knlGS:0000000000000000 +<4>[196727.311554] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +<4>[196727.311581] CR2: 00007a46751eb000 CR3: 0000005e65688000 CR4: 00000000000407e0 +<4>[196727.311625] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 +<4>[196727.311669] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 +<4>[196727.311713] Stack: +<4>[196727.311733] ffff8854c398ecc0 ffff8854c398ecc0 ffff885effd23ab0 ffffffff815b7f42 +<4>[196727.311784] ffff88be6595bc00 ffff8854c398ecc0 0000000000000000 ffff8854c398ecc0 +<4>[196727.311834] ffff885effd23ad0 ffffffff815b86c6 ffff885d5a590800 ffff8816827821c0 +<4>[196727.311885] Call Trace: +<4>[196727.311907] +<4>[196727.311912] [] dst_destroy+0x32/0xe0 +<4>[196727.311959] [] dst_release+0x56/0x80 +<4>[196727.311986] [] tcp_v4_do_rcv+0x2a5/0x4a0 +<4>[196727.312013] [] tcp_v4_rcv+0x7da/0x820 +<4>[196727.312041] [] ? ip_rcv_finish+0x360/0x360 +<4>[196727.312070] [] ? nf_hook_slow+0x7d/0x150 +<4>[196727.312097] [] ? 
ip_rcv_finish+0x360/0x360 +<4>[196727.312125] [] ip_local_deliver_finish+0xb2/0x230 +<4>[196727.312154] [] ip_local_deliver+0x4a/0x90 +<4>[196727.312183] [] ip_rcv_finish+0x119/0x360 +<4>[196727.312212] [] ip_rcv+0x22b/0x340 +<4>[196727.312242] [] ? macvlan_broadcast+0x160/0x160 [macvlan] +<4>[196727.312275] [] __netif_receive_skb_core+0x512/0x640 +<4>[196727.312308] [] ? kmem_cache_alloc+0x13b/0x150 +<4>[196727.312338] [] __netif_receive_skb+0x21/0x70 +<4>[196727.312368] [] netif_receive_skb+0x31/0xa0 +<4>[196727.312397] [] napi_gro_receive+0xe8/0x140 +<4>[196727.312433] [] ixgbe_poll+0x551/0x11f0 [ixgbe] +<4>[196727.312463] [] ? ip_rcv+0x22b/0x340 +<4>[196727.312491] [] net_rx_action+0x111/0x210 +<4>[196727.312521] [] ? __netif_receive_skb+0x21/0x70 +<4>[196727.312552] [] __do_softirq+0xd0/0x270 +<4>[196727.312583] [] call_softirq+0x1c/0x30 +<4>[196727.312613] [] do_softirq+0x55/0x90 +<4>[196727.312640] [] irq_exit+0x55/0x60 +<4>[196727.312668] [] do_IRQ+0x63/0xe0 +<4>[196727.312696] [] common_interrupt+0x6a/0x6a +<4>[196727.312722] +<1>[196727.313071] RIP [] ipv4_dst_destroy+0x4f/0x80 +<4>[196727.313100] RSP +<4>[196727.313377] ---[ end trace 64b3f14fae0f2e29 ]--- +<0>[196727.380908] Kernel panic - not syncing: Fatal exception in interrupt + +Reported-by: Alexey Preobrazhensky +Reported-by: dormando +Signed-off-by: Eric Dumazet +Fixes: 8141ed9fcedb2 ("ipv4: Add a socket release callback for datagram sockets") +Cc: Steffen Klassert +Signed-off-by: David S. Miller +--- + net/ipv4/datagram.c | 20 +++++++++++++++----- + 1 file changed, 15 insertions(+), 5 deletions(-) + +diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c +index 8b5134c..a3095fd 100644 +--- a/net/ipv4/datagram.c ++++ b/net/ipv4/datagram.c +@@ -86,18 +86,26 @@ out: + } + EXPORT_SYMBOL(ip4_datagram_connect); + ++/* Because UDP xmit path can manipulate sk_dst_cache without holding ++ * socket lock, we need to use sk_dst_set() here, ++ * even if we own the socket lock. 
++ */ + void ip4_datagram_release_cb(struct sock *sk) + { + const struct inet_sock *inet = inet_sk(sk); + const struct ip_options_rcu *inet_opt; + __be32 daddr = inet->inet_daddr; ++ struct dst_entry *dst; + struct flowi4 fl4; + struct rtable *rt; + +- if (! __sk_dst_get(sk) || __sk_dst_check(sk, 0)) +- return; +- + rcu_read_lock(); ++ ++ dst = __sk_dst_get(sk); ++ if (!dst || !dst->obsolete || dst->ops->check(dst, 0)) { ++ rcu_read_unlock(); ++ return; ++ } + inet_opt = rcu_dereference(inet->inet_opt); + if (inet_opt && inet_opt->opt.srr) + daddr = inet_opt->opt.faddr; +@@ -105,8 +113,10 @@ void ip4_datagram_release_cb(struct sock *sk) + inet->inet_saddr, inet->inet_dport, + inet->inet_sport, sk->sk_protocol, + RT_CONN_FLAGS(sk), sk->sk_bound_dev_if); +- if (!IS_ERR(rt)) +- __sk_dst_set(sk, &rt->dst); ++ ++ dst = !IS_ERR(rt) ? &rt->dst : NULL; ++ sk_dst_set(sk, dst); ++ + rcu_read_unlock(); + } + EXPORT_SYMBOL_GPL(ip4_datagram_release_cb); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2015-3288/ANY/0.patch b/Patches/Linux_CVEs/CVE-2015-3288/ANY/0.patch index e69de29b..a140407e 100644 --- a/Patches/Linux_CVEs/CVE-2015-3288/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2015-3288/ANY/0.patch @@ -0,0 +1,76 @@ +From 6b7339f4c31ad69c8e9c0b2859276e22cf72176d Mon Sep 17 00:00:00 2001 +From: "Kirill A. Shutemov" +Date: Mon, 6 Jul 2015 23:18:37 +0300 +Subject: mm: avoid setting up anonymous pages into file mapping + +Reading page fault handler code I've noticed that under right +circumstances kernel would map anonymous pages into file mappings: if +the VMA doesn't have vm_ops->fault() and the VMA wasn't fully populated +on ->mmap(), kernel would handle page fault to not populated pte with +do_anonymous_page(). + +Let's change page fault handler to use do_anonymous_page() only on +anonymous VMA (->vm_ops == NULL) and make sure that the VMA is not +shared. 
+ +For file mappings without vm_ops->fault() or shred VMA without vm_ops, +page fault on pte_none() entry would lead to SIGBUS. + +Signed-off-by: Kirill A. Shutemov +Acked-by: Oleg Nesterov +Cc: Andrew Morton +Cc: Willy Tarreau +Cc: stable@vger.kernel.org +Signed-off-by: Linus Torvalds +--- + mm/memory.c | 20 +++++++++++++------- + 1 file changed, 13 insertions(+), 7 deletions(-) + +diff --git a/mm/memory.c b/mm/memory.c +index a84fbb7..388dcf9 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -2670,6 +2670,10 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma, + + pte_unmap(page_table); + ++ /* File mapping without ->vm_ops ? */ ++ if (vma->vm_flags & VM_SHARED) ++ return VM_FAULT_SIGBUS; ++ + /* Check if we need to add a guard page to the stack */ + if (check_stack_guard_page(vma, address) < 0) + return VM_FAULT_SIGSEGV; +@@ -3099,6 +3103,9 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma, + - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + + pte_unmap(page_table); ++ /* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */ ++ if (!vma->vm_ops->fault) ++ return VM_FAULT_SIGBUS; + if (!(flags & FAULT_FLAG_WRITE)) + return do_read_fault(mm, vma, address, pmd, pgoff, flags, + orig_pte); +@@ -3244,13 +3251,12 @@ static int handle_pte_fault(struct mm_struct *mm, + barrier(); + if (!pte_present(entry)) { + if (pte_none(entry)) { +- if (vma->vm_ops) { +- if (likely(vma->vm_ops->fault)) +- return do_fault(mm, vma, address, pte, +- pmd, flags, entry); +- } +- return do_anonymous_page(mm, vma, address, +- pte, pmd, flags); ++ if (vma->vm_ops) ++ return do_fault(mm, vma, address, pte, pmd, ++ flags, entry); ++ ++ return do_anonymous_page(mm, vma, address, pte, pmd, ++ flags); + } + return do_swap_page(mm, vma, address, + pte, pmd, flags, entry); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2015-4170/ANY/0.patch b/Patches/Linux_CVEs/CVE-2015-4170/ANY/0.patch index e69de29b..eced29e8 100644 --- 
a/Patches/Linux_CVEs/CVE-2015-4170/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2015-4170/ANY/0.patch @@ -0,0 +1,121 @@ +From cf872776fc84128bb779ce2b83a37c884c3203ae Mon Sep 17 00:00:00 2001 +From: Peter Hurley +Date: Wed, 11 Dec 2013 21:11:58 -0500 +Subject: tty: Fix hang at ldsem_down_read() + +When a controlling tty is being hung up and the hang up is +waiting for a just-signalled tty reader or writer to exit, and a new tty +reader/writer tries to acquire an ldisc reference concurrently with the +ldisc reference release from the signalled reader/writer, the hangup +can hang. The new reader/writer is sleeping in ldsem_down_read() and the +hangup is sleeping in ldsem_down_write() [1]. + +The new reader/writer fails to wakeup the waiting hangup because the +wrong lock count value is checked (the old lock count rather than the new +lock count) to see if the lock is unowned. + +Change helper function to return the new lock count if the cmpxchg was +successful; document this behavior. + +[1] edited dmesg log from reporter + +SysRq : Show Blocked State + task PC stack pid father +systemd D ffff88040c4f0000 0 1 0 0x00000000 + ffff88040c49fbe0 0000000000000046 ffff88040c4a0000 ffff88040c49ffd8 + 00000000001d3980 00000000001d3980 ffff88040c4a0000 ffff88040593d840 + ffff88040c49fb40 ffffffff810a4cc0 0000000000000006 0000000000000023 +Call Trace: + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? sched_clock_cpu+0x9f/0xe4 + [] schedule+0x24/0x5e + [] schedule_timeout+0x15b/0x1ec + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? _raw_spin_unlock_irq+0x24/0x26 + [] down_read_failed+0xe3/0x1b9 + [] ldsem_down_read+0x8b/0xa5 + [] ? tty_ldisc_ref_wait+0x1b/0x44 + [] tty_ldisc_ref_wait+0x1b/0x44 + [] tty_write+0x7d/0x28a + [] redirected_tty_write+0x8d/0x98 + [] ? tty_write+0x28a/0x28a + [] do_loop_readv_writev+0x56/0x79 + [] do_readv_writev+0x1b0/0x1ff + [] ? do_vfs_ioctl+0x32a/0x489 + [] ? 
final_putname+0x1d/0x3a + [] vfs_writev+0x2e/0x49 + [] SyS_writev+0x47/0xaa + [] system_call_fastpath+0x16/0x1b +bash D ffffffff81c104c0 0 5469 5302 0x00000082 + ffff8800cf817ac0 0000000000000046 ffff8804086b22a0 ffff8800cf817fd8 + 00000000001d3980 00000000001d3980 ffff8804086b22a0 ffff8800cf817a48 + 000000000000b9a0 ffff8800cf817a78 ffffffff81004675 ffff8800cf817a44 +Call Trace: + [] ? dump_trace+0x165/0x29c + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? save_stack_trace+0x26/0x41 + [] schedule+0x24/0x5e + [] schedule_timeout+0x15b/0x1ec + [] ? sched_clock_cpu+0x9f/0xe4 + [] ? down_write_failed+0xa3/0x1c9 + [] ? _raw_spin_unlock_irq+0x24/0x26 + [] down_write_failed+0xab/0x1c9 + [] ldsem_down_write+0x79/0xb1 + [] ? tty_ldisc_lock_pair_timeout+0xa5/0xd9 + [] tty_ldisc_lock_pair_timeout+0xa5/0xd9 + [] tty_ldisc_hangup+0xc4/0x218 + [] __tty_hangup+0x2e2/0x3ed + [] disassociate_ctty+0x63/0x226 + [] do_exit+0x79f/0xa11 + [] ? get_signal_to_deliver+0x206/0x62f + [] ? lock_release_holdtime.part.8+0xf/0x16e + [] do_group_exit+0x47/0xb5 + [] get_signal_to_deliver+0x241/0x62f + [] do_signal+0x43/0x59d + [] ? __audit_syscall_exit+0x21a/0x2a8 + [] ? lock_release_holdtime.part.8+0xf/0x16e + [] do_notify_resume+0x54/0x6c + [] int_signal+0x12/0x17 + +Reported-by: Sami Farin +Cc: # 3.12.x +Signed-off-by: Peter Hurley +Signed-off-by: Greg Kroah-Hartman +--- + drivers/tty/tty_ldsem.c | 16 +++++++++++++--- + 1 file changed, 13 insertions(+), 3 deletions(-) + +diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c +index 22fad8a..d8a55e8 100644 +--- a/drivers/tty/tty_ldsem.c ++++ b/drivers/tty/tty_ldsem.c +@@ -86,11 +86,21 @@ static inline long ldsem_atomic_update(long delta, struct ld_semaphore *sem) + return atomic_long_add_return(delta, (atomic_long_t *)&sem->count); + } + ++/* ++ * ldsem_cmpxchg() updates @*old with the last-known sem->count value. ++ * Returns 1 if count was successfully changed; @*old will have @new value. 
++ * Returns 0 if count was not changed; @*old will have most recent sem->count ++ */ + static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem) + { +- long tmp = *old; +- *old = atomic_long_cmpxchg(&sem->count, *old, new); +- return *old == tmp; ++ long tmp = atomic_long_cmpxchg(&sem->count, *old, new); ++ if (tmp == *old) { ++ *old = new; ++ return 1; ++ } else { ++ *old = tmp; ++ return 0; ++ } + } + + /* +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2015-5706/ANY/0.patch b/Patches/Linux_CVEs/CVE-2015-5706/ANY/0.patch index e69de29b..f63b5535 100644 --- a/Patches/Linux_CVEs/CVE-2015-5706/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2015-5706/ANY/0.patch @@ -0,0 +1,39 @@ +From f15133df088ecadd141ea1907f2c96df67c729f0 Mon Sep 17 00:00:00 2001 +From: Al Viro +Date: Fri, 8 May 2015 22:53:15 -0400 +Subject: path_openat(): fix double fput() + +path_openat() jumps to the wrong place after do_tmpfile() - it has +already done path_cleanup() (as part of path_lookupat() called by +do_tmpfile()), so doing that again can lead to double fput(). 
+
+Cc: stable@vger.kernel.org # v3.11+
+Signed-off-by: Al Viro
+---
+ fs/namei.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/fs/namei.c b/fs/namei.c
+index f67cf6c..fe30d3b 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3233,7 +3233,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+
+ 	if (unlikely(file->f_flags & __O_TMPFILE)) {
+ 		error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
+-		goto out;
++		goto out2;
+ 	}
+
+ 	error = path_init(dfd, pathname, flags, nd);
+@@ -3263,6 +3263,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+ 	}
+ out:
+ 	path_cleanup(nd);
++out2:
+ 	if (!(opened & FILE_OPENED)) {
+ 		BUG_ON(!error);
+ 		put_filp(file);
+--
+cgit v1.1
+
diff --git a/Patches/Linux_CVEs/CVE-2015-8964/ANY/0.patch b/Patches/Linux_CVEs/CVE-2015-8964/ANY/0.patch
index e69de29b..3da3aa42 100644
--- a/Patches/Linux_CVEs/CVE-2015-8964/ANY/0.patch
+++ b/Patches/Linux_CVEs/CVE-2015-8964/ANY/0.patch
@@ -0,0 +1,78 @@
+From dd42bf1197144ede075a9d4793123f7689e164bc Mon Sep 17 00:00:00 2001
+From: Peter Hurley
+Date: Fri, 27 Nov 2015 14:30:21 -0500
+Subject: tty: Prevent ldisc drivers from re-using stale tty fields
+
+Line discipline drivers may mistakenly misuse ldisc-related fields
+when initializing. For example, a failure to initialize tty->receive_room
+in the N_GIGASET_M101 line discipline was recently found and fixed [1].
+Now, the N_X25 line discipline has been discovered accessing the previous
+line discipline's already-freed private data [2].
+
+Harden the ldisc interface against misuse by initializing revelant
+tty fields before instancing the new line discipline.
+ +[1] + commit fd98e9419d8d622a4de91f76b306af6aa627aa9c + Author: Tilman Schmidt + Date: Tue Jul 14 00:37:13 2015 +0200 + + isdn/gigaset: reset tty->receive_room when attaching ser_gigaset + +[2] Report from Sasha Levin + [ 634.336761] ================================================================== + [ 634.338226] BUG: KASAN: use-after-free in x25_asy_open_tty+0x13d/0x490 at addr ffff8800a743efd0 + [ 634.339558] Read of size 4 by task syzkaller_execu/8981 + [ 634.340359] ============================================================================= + [ 634.341598] BUG kmalloc-512 (Not tainted): kasan: bad access detected + ... + [ 634.405018] Call Trace: + [ 634.405277] dump_stack (lib/dump_stack.c:52) + [ 634.405775] print_trailer (mm/slub.c:655) + [ 634.406361] object_err (mm/slub.c:662) + [ 634.406824] kasan_report_error (mm/kasan/report.c:138 mm/kasan/report.c:236) + [ 634.409581] __asan_report_load4_noabort (mm/kasan/report.c:279) + [ 634.411355] x25_asy_open_tty (drivers/net/wan/x25_asy.c:559 (discriminator 1)) + [ 634.413997] tty_ldisc_open.isra.2 (drivers/tty/tty_ldisc.c:447) + [ 634.414549] tty_set_ldisc (drivers/tty/tty_ldisc.c:567) + [ 634.415057] tty_ioctl (drivers/tty/tty_io.c:2646 drivers/tty/tty_io.c:2879) + [ 634.423524] do_vfs_ioctl (fs/ioctl.c:43 fs/ioctl.c:607) + [ 634.427491] SyS_ioctl (fs/ioctl.c:622 fs/ioctl.c:613) + [ 634.427945] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:188) + +Cc: Tilman Schmidt +Cc: Sasha Levin +Signed-off-by: Peter Hurley +Signed-off-by: Greg Kroah-Hartman +--- + drivers/tty/tty_ldisc.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c +index 9ec1250..a054d03 100644 +--- a/drivers/tty/tty_ldisc.c ++++ b/drivers/tty/tty_ldisc.c +@@ -417,6 +417,10 @@ EXPORT_SYMBOL_GPL(tty_ldisc_flush); + * they are not on hot paths so a little discipline won't do + * any harm. 
+ * ++ * The line discipline-related tty_struct fields are reset to ++ * prevent the ldisc driver from re-using stale information for ++ * the new ldisc instance. ++ * + * Locking: takes termios_rwsem + */ + +@@ -425,6 +429,9 @@ static void tty_set_termios_ldisc(struct tty_struct *tty, int num) + down_write(&tty->termios_rwsem); + tty->termios.c_line = num; + up_write(&tty->termios_rwsem); ++ ++ tty->disc_data = NULL; ++ tty->receive_room = 0; + } + + /** +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-0806/3.10/0.patch b/Patches/Linux_CVEs/CVE-2016-0806/3.10/0.patch index 78d88c1a..c2124b2f 100644 --- a/Patches/Linux_CVEs/CVE-2016-0806/3.10/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-0806/3.10/0.patch @@ -25,6 +25,7 @@ + @@ -832,6 +833,7 @@ + @@ -4112,6 +4114,8 @@ + + @@ -4123,6 +4127,7 @@ + diff --git a/Patches/Linux_CVEs/CVE-2016-0806/3.4/1.patch b/Patches/Linux_CVEs/CVE-2016-0806/3.4/1.patch index c8f7f96d..374e10ad 100644 --- a/Patches/Linux_CVEs/CVE-2016-0806/3.4/1.patch +++ b/Patches/Linux_CVEs/CVE-2016-0806/3.4/1.patch @@ -25,6 +25,7 @@ + @@ -832,6 +833,7 @@ + @@ -4112,6 +4114,8 @@ + + @@ -4123,6 +4127,7 @@ + diff --git a/Patches/Linux_CVEs/CVE-2016-10088/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-10088/ANY/0.patch index e69de29b..695b415d 100644 --- a/Patches/Linux_CVEs/CVE-2016-10088/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-10088/ANY/0.patch @@ -0,0 +1,48 @@ +From 128394eff343fc6d2f32172f03e24829539c5835 Mon Sep 17 00:00:00 2001 +From: Al Viro +Date: Fri, 16 Dec 2016 13:42:06 -0500 +Subject: sg_write()/bsg_write() is not fit to be called under KERNEL_DS + +Both damn things interpret userland pointers embedded into the payload; +worse, they are actually traversing those. Leaving aside the bad +API design, this is very much _not_ safe to call with KERNEL_DS. +Bail out early if that happens. 
+ +Cc: stable@vger.kernel.org +Signed-off-by: Al Viro +--- + block/bsg.c | 3 +++ + drivers/scsi/sg.c | 3 +++ + 2 files changed, 6 insertions(+) + +diff --git a/block/bsg.c b/block/bsg.c +index 8a05a40..a57046d 100644 +--- a/block/bsg.c ++++ b/block/bsg.c +@@ -655,6 +655,9 @@ bsg_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos) + + dprintk("%s: write %Zd bytes\n", bd->name, count); + ++ if (unlikely(segment_eq(get_fs(), KERNEL_DS))) ++ return -EINVAL; ++ + bsg_set_block(bd, file); + + bytes_written = 0; +diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c +index 070332e..dbe5b4b 100644 +--- a/drivers/scsi/sg.c ++++ b/drivers/scsi/sg.c +@@ -581,6 +581,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos) + sg_io_hdr_t *hp; + unsigned char cmnd[SG_MAX_CDB_SIZE]; + ++ if (unlikely(segment_eq(get_fs(), KERNEL_DS))) ++ return -EINVAL; ++ + if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp))) + return -ENXIO; + SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-1583/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-1583/ANY/0.patch index e69de29b..9d7b23e6 100644 --- a/Patches/Linux_CVEs/CVE-2016-1583/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-1583/ANY/0.patch @@ -0,0 +1,57 @@ +From f0fe970df3838c202ef6c07a4c2b36838ef0a88b Mon Sep 17 00:00:00 2001 +From: Jeff Mahoney +Date: Tue, 5 Jul 2016 17:32:30 -0400 +Subject: ecryptfs: don't allow mmap when the lower fs doesn't support it + +There are legitimate reasons to disallow mmap on certain files, notably +in sysfs or procfs. We shouldn't emulate mmap support on file systems +that don't offer support natively. 
+ +CVE-2016-1583 + +Signed-off-by: Jeff Mahoney +Cc: stable@vger.kernel.org +[tyhicks: clean up f_op check by using ecryptfs_file_to_lower()] +Signed-off-by: Tyler Hicks +--- + fs/ecryptfs/file.c | 15 ++++++++++++++- + 1 file changed, 14 insertions(+), 1 deletion(-) + +(limited to 'fs/ecryptfs/file.c') + +diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c +index 53d0141..ca4e837 100644 +--- a/fs/ecryptfs/file.c ++++ b/fs/ecryptfs/file.c +@@ -169,6 +169,19 @@ out: + return rc; + } + ++static int ecryptfs_mmap(struct file *file, struct vm_area_struct *vma) ++{ ++ struct file *lower_file = ecryptfs_file_to_lower(file); ++ /* ++ * Don't allow mmap on top of file systems that don't support it ++ * natively. If FILESYSTEM_MAX_STACK_DEPTH > 2 or ecryptfs ++ * allows recursive mounting, this will need to be extended. ++ */ ++ if (!lower_file->f_op->mmap) ++ return -ENODEV; ++ return generic_file_mmap(file, vma); ++} ++ + /** + * ecryptfs_open + * @inode: inode specifying file to open +@@ -403,7 +416,7 @@ const struct file_operations ecryptfs_main_fops = { + #ifdef CONFIG_COMPAT + .compat_ioctl = ecryptfs_compat_ioctl, + #endif +- .mmap = generic_file_mmap, ++ .mmap = ecryptfs_mmap, + .open = ecryptfs_open, + .flush = ecryptfs_flush, + .release = ecryptfs_release, +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-3137/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-3137/ANY/0.patch index e69de29b..c54514e3 100644 --- a/Patches/Linux_CVEs/CVE-2016-3137/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-3137/ANY/0.patch @@ -0,0 +1,49 @@ +From c55aee1bf0e6b6feec8b2927b43f7a09a6d5f754 Mon Sep 17 00:00:00 2001 +From: Oliver Neukum +Date: Thu, 31 Mar 2016 12:04:25 -0400 +Subject: USB: cypress_m8: add endpoint sanity check + +An attack using missing endpoints exists. 
+ +CVE-2016-3137 + +Signed-off-by: Oliver Neukum +CC: stable@vger.kernel.org +Signed-off-by: Johan Hovold +Signed-off-by: Greg Kroah-Hartman +--- + drivers/usb/serial/cypress_m8.c | 11 +++++------ + 1 file changed, 5 insertions(+), 6 deletions(-) + +diff --git a/drivers/usb/serial/cypress_m8.c b/drivers/usb/serial/cypress_m8.c +index b283eb8..bbeeb2b 100644 +--- a/drivers/usb/serial/cypress_m8.c ++++ b/drivers/usb/serial/cypress_m8.c +@@ -447,6 +447,11 @@ static int cypress_generic_port_probe(struct usb_serial_port *port) + struct usb_serial *serial = port->serial; + struct cypress_private *priv; + ++ if (!port->interrupt_out_urb || !port->interrupt_in_urb) { ++ dev_err(&port->dev, "required endpoint is missing\n"); ++ return -ENODEV; ++ } ++ + priv = kzalloc(sizeof(struct cypress_private), GFP_KERNEL); + if (!priv) + return -ENOMEM; +@@ -606,12 +611,6 @@ static int cypress_open(struct tty_struct *tty, struct usb_serial_port *port) + cypress_set_termios(tty, port, &priv->tmp_termios); + + /* setup the port and start reading from the device */ +- if (!port->interrupt_in_urb) { +- dev_err(&port->dev, "%s - interrupt_in_urb is empty!\n", +- __func__); +- return -1; +- } +- + usb_fill_int_urb(port->interrupt_in_urb, serial->dev, + usb_rcvintpipe(serial->dev, port->interrupt_in_endpointAddress), + port->interrupt_in_urb->transfer_buffer, +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-4794/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-4794/ANY/0.patch index e69de29b..92a1bcb5 100644 --- a/Patches/Linux_CVEs/CVE-2016-4794/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-4794/ANY/0.patch @@ -0,0 +1,107 @@ +From 6710e594f71ccaad8101bc64321152af7cd9ea28 Mon Sep 17 00:00:00 2001 +From: Tejun Heo +Date: Wed, 25 May 2016 11:48:25 -0400 +Subject: percpu: fix synchronization between synchronous map extension and + chunk destruction + +For non-atomic allocations, pcpu_alloc() can try to extend the area +map synchronously after dropping pcpu_lock; however, the extension +wasn't 
synchronized against chunk destruction and the chunk might get +freed while extension is in progress. + +This patch fixes the bug by putting most of non-atomic allocations +under pcpu_alloc_mutex to synchronize against pcpu_balance_work which +is responsible for async chunk management including destruction. + +Signed-off-by: Tejun Heo +Reported-and-tested-by: Alexei Starovoitov +Reported-by: Vlastimil Babka +Reported-by: Sasha Levin +Cc: stable@vger.kernel.org # v3.18+ +Fixes: 1a4d76076cda ("percpu: implement asynchronous chunk population") +--- + mm/percpu.c | 16 ++++++++-------- + 1 file changed, 8 insertions(+), 8 deletions(-) + +diff --git a/mm/percpu.c b/mm/percpu.c +index b1d2a38..9903830 100644 +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -162,7 +162,7 @@ static struct pcpu_chunk *pcpu_reserved_chunk; + static int pcpu_reserved_chunk_limit; + + static DEFINE_SPINLOCK(pcpu_lock); /* all internal data structures */ +-static DEFINE_MUTEX(pcpu_alloc_mutex); /* chunk create/destroy, [de]pop */ ++static DEFINE_MUTEX(pcpu_alloc_mutex); /* chunk create/destroy, [de]pop, map ext */ + + static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */ + +@@ -444,6 +444,8 @@ static int pcpu_extend_area_map(struct pcpu_chunk *chunk, int new_alloc) + size_t old_size = 0, new_size = new_alloc * sizeof(new[0]); + unsigned long flags; + ++ lockdep_assert_held(&pcpu_alloc_mutex); ++ + new = pcpu_mem_zalloc(new_size); + if (!new) + return -ENOMEM; +@@ -890,6 +892,9 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, + return NULL; + } + ++ if (!is_atomic) ++ mutex_lock(&pcpu_alloc_mutex); ++ + spin_lock_irqsave(&pcpu_lock, flags); + + /* serve reserved allocations from the reserved chunk if available */ +@@ -962,12 +967,9 @@ restart: + if (is_atomic) + goto fail; + +- mutex_lock(&pcpu_alloc_mutex); +- + if (list_empty(&pcpu_slot[pcpu_nr_slots - 1])) { + chunk = pcpu_create_chunk(); + if (!chunk) { +- mutex_unlock(&pcpu_alloc_mutex); + err = 
"failed to allocate new chunk"; + goto fail; + } +@@ -978,7 +980,6 @@ restart: + spin_lock_irqsave(&pcpu_lock, flags); + } + +- mutex_unlock(&pcpu_alloc_mutex); + goto restart; + + area_found: +@@ -988,8 +989,6 @@ area_found: + if (!is_atomic) { + int page_start, page_end, rs, re; + +- mutex_lock(&pcpu_alloc_mutex); +- + page_start = PFN_DOWN(off); + page_end = PFN_UP(off + size); + +@@ -1000,7 +999,6 @@ area_found: + + spin_lock_irqsave(&pcpu_lock, flags); + if (ret) { +- mutex_unlock(&pcpu_alloc_mutex); + pcpu_free_area(chunk, off, &occ_pages); + err = "failed to populate"; + goto fail_unlock; +@@ -1040,6 +1038,8 @@ fail: + /* see the flag handling in pcpu_blance_workfn() */ + pcpu_atomic_alloc_failed = true; + pcpu_schedule_balance_work(); ++ } else { ++ mutex_unlock(&pcpu_alloc_mutex); + } + return NULL; + } +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-4794/ANY/1.patch b/Patches/Linux_CVEs/CVE-2016-4794/ANY/1.patch index e69de29b..6ab2f0a9 100644 --- a/Patches/Linux_CVEs/CVE-2016-4794/ANY/1.patch +++ b/Patches/Linux_CVEs/CVE-2016-4794/ANY/1.patch @@ -0,0 +1,156 @@ +From 4f996e234dad488e5d9ba0858bc1bae12eff82c3 Mon Sep 17 00:00:00 2001 +From: Tejun Heo +Date: Wed, 25 May 2016 11:48:25 -0400 +Subject: percpu: fix synchronization between chunk->map_extend_work and chunk + destruction + +Atomic allocations can trigger async map extensions which is serviced +by chunk->map_extend_work. pcpu_balance_work which is responsible for +destroying idle chunks wasn't synchronizing properly against +chunk->map_extend_work and may end up freeing the chunk while the work +item is still in flight. + +This patch fixes the bug by rolling async map extension operations +into pcpu_balance_work. 
+ +Signed-off-by: Tejun Heo +Reported-and-tested-by: Alexei Starovoitov +Reported-by: Vlastimil Babka +Reported-by: Sasha Levin +Cc: stable@vger.kernel.org # v3.18+ +Fixes: 9c824b6a172c ("percpu: make sure chunk->map array has available space") +--- + mm/percpu.c | 57 ++++++++++++++++++++++++++++++++++++--------------------- + 1 file changed, 36 insertions(+), 21 deletions(-) + +diff --git a/mm/percpu.c b/mm/percpu.c +index 0c59684..b1d2a38 100644 +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -112,7 +112,7 @@ struct pcpu_chunk { + int map_used; /* # of map entries used before the sentry */ + int map_alloc; /* # of map entries allocated */ + int *map; /* allocation map */ +- struct work_struct map_extend_work;/* async ->map[] extension */ ++ struct list_head map_extend_list;/* on pcpu_map_extend_chunks */ + + void *data; /* chunk data */ + int first_free; /* no free below this */ +@@ -166,6 +166,9 @@ static DEFINE_MUTEX(pcpu_alloc_mutex); /* chunk create/destroy, [de]pop */ + + static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */ + ++/* chunks which need their map areas extended, protected by pcpu_lock */ ++static LIST_HEAD(pcpu_map_extend_chunks); ++ + /* + * The number of empty populated pages, protected by pcpu_lock. The + * reserved chunk doesn't contribute to the count. 
+@@ -395,13 +398,19 @@ static int pcpu_need_to_extend(struct pcpu_chunk *chunk, bool is_atomic) + { + int margin, new_alloc; + ++ lockdep_assert_held(&pcpu_lock); ++ + if (is_atomic) { + margin = 3; + + if (chunk->map_alloc < +- chunk->map_used + PCPU_ATOMIC_MAP_MARGIN_LOW && +- pcpu_async_enabled) +- schedule_work(&chunk->map_extend_work); ++ chunk->map_used + PCPU_ATOMIC_MAP_MARGIN_LOW) { ++ if (list_empty(&chunk->map_extend_list)) { ++ list_add_tail(&chunk->map_extend_list, ++ &pcpu_map_extend_chunks); ++ pcpu_schedule_balance_work(); ++ } ++ } + } else { + margin = PCPU_ATOMIC_MAP_MARGIN_HIGH; + } +@@ -467,20 +476,6 @@ out_unlock: + return 0; + } + +-static void pcpu_map_extend_workfn(struct work_struct *work) +-{ +- struct pcpu_chunk *chunk = container_of(work, struct pcpu_chunk, +- map_extend_work); +- int new_alloc; +- +- spin_lock_irq(&pcpu_lock); +- new_alloc = pcpu_need_to_extend(chunk, false); +- spin_unlock_irq(&pcpu_lock); +- +- if (new_alloc) +- pcpu_extend_area_map(chunk, new_alloc); +-} +- + /** + * pcpu_fit_in_area - try to fit the requested allocation in a candidate area + * @chunk: chunk the candidate area belongs to +@@ -740,7 +735,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void) + chunk->map_used = 1; + + INIT_LIST_HEAD(&chunk->list); +- INIT_WORK(&chunk->map_extend_work, pcpu_map_extend_workfn); ++ INIT_LIST_HEAD(&chunk->map_extend_list); + chunk->free_size = pcpu_unit_size; + chunk->contig_hint = pcpu_unit_size; + +@@ -1129,6 +1124,7 @@ static void pcpu_balance_workfn(struct work_struct *work) + if (chunk == list_first_entry(free_head, struct pcpu_chunk, list)) + continue; + ++ list_del_init(&chunk->map_extend_list); + list_move(&chunk->list, &to_free); + } + +@@ -1146,6 +1142,25 @@ static void pcpu_balance_workfn(struct work_struct *work) + pcpu_destroy_chunk(chunk); + } + ++ /* service chunks which requested async area map extension */ ++ do { ++ int new_alloc = 0; ++ ++ spin_lock_irq(&pcpu_lock); ++ ++ chunk = 
list_first_entry_or_null(&pcpu_map_extend_chunks, ++ struct pcpu_chunk, map_extend_list); ++ if (chunk) { ++ list_del_init(&chunk->map_extend_list); ++ new_alloc = pcpu_need_to_extend(chunk, false); ++ } ++ ++ spin_unlock_irq(&pcpu_lock); ++ ++ if (new_alloc) ++ pcpu_extend_area_map(chunk, new_alloc); ++ } while (chunk); ++ + /* + * Ensure there are certain number of free populated pages for + * atomic allocs. Fill up from the most packed so that atomic +@@ -1644,7 +1659,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai, + */ + schunk = memblock_virt_alloc(pcpu_chunk_struct_size, 0); + INIT_LIST_HEAD(&schunk->list); +- INIT_WORK(&schunk->map_extend_work, pcpu_map_extend_workfn); ++ INIT_LIST_HEAD(&schunk->map_extend_list); + schunk->base_addr = base_addr; + schunk->map = smap; + schunk->map_alloc = ARRAY_SIZE(smap); +@@ -1673,7 +1688,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai, + if (dyn_size) { + dchunk = memblock_virt_alloc(pcpu_chunk_struct_size, 0); + INIT_LIST_HEAD(&dchunk->list); +- INIT_WORK(&dchunk->map_extend_work, pcpu_map_extend_workfn); ++ INIT_LIST_HEAD(&dchunk->map_extend_list); + dchunk->base_addr = base_addr; + dchunk->map = dmap; + dchunk->map_alloc = ARRAY_SIZE(dmap); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-7916/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-7916/ANY/0.patch index e69de29b..310cb3ec 100644 --- a/Patches/Linux_CVEs/CVE-2016-7916/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-7916/ANY/0.patch @@ -0,0 +1,56 @@ +From 8148a73c9901a8794a50f950083c00ccf97d43b3 Mon Sep 17 00:00:00 2001 +From: Mathias Krause +Date: Thu, 5 May 2016 16:22:26 -0700 +Subject: proc: prevent accessing /proc//environ until it's ready + +If /proc//environ gets read before the envp[] array is fully set up +in create_{aout,elf,elf_fdpic,flat}_tables(), we might end up trying to +read more bytes than are actually written, as env_start will already be +set but env_end will still be zero, making the range 
calculation +underflow, allowing to read beyond the end of what has been written. + +Fix this as it is done for /proc//cmdline by testing env_end for +zero. It is, apparently, intentionally set last in create_*_tables(). + +This bug was found by the PaX size_overflow plugin that detected the +arithmetic underflow of 'this_len = env_end - (env_start + src)' when +env_end is still zero. + +The expected consequence is that userland trying to access +/proc//environ of a not yet fully set up process may get +inconsistent data as we're in the middle of copying in the environment +variables. + +Fixes: https://forums.grsecurity.net/viewtopic.php?f=3&t=4363 +Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=116461 +Signed-off-by: Mathias Krause +Cc: Emese Revfy +Cc: Pax Team +Cc: Al Viro +Cc: Mateusz Guzik +Cc: Alexey Dobriyan +Cc: Cyrill Gorcunov +Cc: Jarod Wilson +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +--- + fs/proc/base.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/fs/proc/base.c b/fs/proc/base.c +index b1755b2..92e37e2 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -955,7 +955,8 @@ static ssize_t environ_read(struct file *file, char __user *buf, + struct mm_struct *mm = file->private_data; + unsigned long env_start, env_end; + +- if (!mm) ++ /* Ensure the process spawned far enough to have an environment. */ ++ if (!mm || !mm->env_end) + return 0; + + page = (char *)__get_free_page(GFP_TEMPORARY); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-8650/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-8650/ANY/0.patch index e69de29b..e449ab06 100644 --- a/Patches/Linux_CVEs/CVE-2016-8650/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-8650/ANY/0.patch @@ -0,0 +1,100 @@ +From f5527fffff3f002b0a6b376163613b82f69de073 Mon Sep 17 00:00:00 2001 +From: Andrey Ryabinin +Date: Thu, 24 Nov 2016 13:23:10 +0000 +Subject: mpi: Fix NULL ptr dereference in mpi_powm() [ver #3] + +This fixes CVE-2016-8650. 
+ +If mpi_powm() is given a zero exponent, it wants to immediately return +either 1 or 0, depending on the modulus. However, if the result was +initialised with zero limb space, no limb space is allocated and a +NULL-pointer exception ensues. + +Fix this by allocating a minimal amount of limb space for the result in +the 0-exponent case when the result is 1 and not touching the limb space +when the result is 0. + +This affects the use of RSA keys and X.509 certificates that carry them. + +BUG: unable to handle kernel NULL pointer dereference at (null) +IP: [] mpi_powm+0x32/0x7e6 +PGD 0 +Oops: 0002 [#1] SMP +Modules linked in: +CPU: 3 PID: 3014 Comm: keyctl Not tainted 4.9.0-rc6-fscache+ #278 +Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014 +task: ffff8804011944c0 task.stack: ffff880401294000 +RIP: 0010:[] [] mpi_powm+0x32/0x7e6 +RSP: 0018:ffff880401297ad8 EFLAGS: 00010212 +RAX: 0000000000000000 RBX: ffff88040868bec0 RCX: ffff88040868bba0 +RDX: ffff88040868b260 RSI: ffff88040868bec0 RDI: ffff88040868bee0 +RBP: ffff880401297ba8 R08: 0000000000000000 R09: 0000000000000000 +R10: 0000000000000047 R11: ffffffff8183b210 R12: 0000000000000000 +R13: ffff8804087c7600 R14: 000000000000001f R15: ffff880401297c50 +FS: 00007f7a7918c700(0000) GS:ffff88041fb80000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 0000000000000000 CR3: 0000000401250000 CR4: 00000000001406e0 +Stack: + ffff88040868bec0 0000000000000020 ffff880401297b00 ffffffff81376cd4 + 0000000000000100 ffff880401297b10 ffffffff81376d12 ffff880401297b30 + ffffffff81376f37 0000000000000100 0000000000000000 ffff880401297ba8 +Call Trace: + [] ? __sg_page_iter_next+0x43/0x66 + [] ? sg_miter_get_next_page+0x1b/0x5d + [] ? sg_miter_next+0x17/0xbd + [] ? mpi_read_raw_from_sgl+0xf2/0x146 + [] rsa_verify+0x9d/0xee + [] ? 
pkcs1pad_sg_set_buf+0x2e/0xbb + [] pkcs1pad_verify+0xc0/0xe1 + [] public_key_verify_signature+0x1b0/0x228 + [] x509_check_for_self_signed+0xa1/0xc4 + [] x509_cert_parse+0x167/0x1a1 + [] x509_key_preparse+0x21/0x1a1 + [] asymmetric_key_preparse+0x34/0x61 + [] key_create_or_update+0x145/0x399 + [] SyS_add_key+0x154/0x19e + [] do_syscall_64+0x80/0x191 + [] entry_SYSCALL64_slow_path+0x25/0x25 +Code: 56 41 55 41 54 53 48 81 ec a8 00 00 00 44 8b 71 04 8b 42 04 4c 8b 67 18 45 85 f6 89 45 80 0f 84 b4 06 00 00 85 c0 75 2f 41 ff ce <49> c7 04 24 01 00 00 00 b0 01 75 0b 48 8b 41 18 48 83 38 01 0f +RIP [] mpi_powm+0x32/0x7e6 + RSP +CR2: 0000000000000000 +---[ end trace d82015255d4a5d8d ]--- + +Basically, this is a backport of a libgcrypt patch: + + http://git.gnupg.org/cgi-bin/gitweb.cgi?p=libgcrypt.git;a=patch;h=6e1adb05d290aeeb1c230c763970695f4a538526 + +Fixes: cdec9cb5167a ("crypto: GnuPG based MPI lib - source files (part 1)") +Signed-off-by: Andrey Ryabinin +Signed-off-by: David Howells +cc: Dmitry Kasatkin +cc: linux-ima-devel@lists.sourceforge.net +cc: stable@vger.kernel.org +Signed-off-by: James Morris +--- + lib/mpi/mpi-pow.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +diff --git a/lib/mpi/mpi-pow.c b/lib/mpi/mpi-pow.c +index 5464c87..e24388a 100644 +--- a/lib/mpi/mpi-pow.c ++++ b/lib/mpi/mpi-pow.c +@@ -64,8 +64,13 @@ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod) + if (!esize) { + /* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0 + * depending on if MOD equals 1. */ +- rp[0] = 1; + res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 
0 : 1; ++ if (res->nlimbs) { ++ if (mpi_resize(res, 1) < 0) ++ goto enomem; ++ rp = res->d; ++ rp[0] = 1; ++ } + res->sign = 0; + goto leave; + } +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-9191/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-9191/ANY/0.patch index e69de29b..cb95d37f 100644 --- a/Patches/Linux_CVEs/CVE-2016-9191/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-9191/ANY/0.patch @@ -0,0 +1,87 @@ +From 93362fa47fe98b62e4a34ab408c4a418432e7939 Mon Sep 17 00:00:00 2001 +From: Zhou Chengming +Date: Fri, 6 Jan 2017 09:32:32 +0800 +Subject: sysctl: Drop reference added by grab_header in proc_sys_readdir + +Fixes CVE-2016-9191, proc_sys_readdir doesn't drop reference +added by grab_header when return from !dir_emit_dots path. +It can cause any path called unregister_sysctl_table will +wait forever. + +The calltrace of CVE-2016-9191: + +[ 5535.960522] Call Trace: +[ 5535.963265] [] schedule+0x3f/0xa0 +[ 5535.968817] [] schedule_timeout+0x3db/0x6f0 +[ 5535.975346] [] ? wait_for_completion+0x45/0x130 +[ 5535.982256] [] wait_for_completion+0xc3/0x130 +[ 5535.988972] [] ? wake_up_q+0x80/0x80 +[ 5535.994804] [] drop_sysctl_table+0xc4/0xe0 +[ 5536.001227] [] drop_sysctl_table+0x77/0xe0 +[ 5536.007648] [] unregister_sysctl_table+0x4d/0xa0 +[ 5536.014654] [] unregister_sysctl_table+0x7f/0xa0 +[ 5536.021657] [] unregister_sched_domain_sysctl+0x15/0x40 +[ 5536.029344] [] partition_sched_domains+0x44/0x450 +[ 5536.036447] [] ? __mutex_unlock_slowpath+0x111/0x1f0 +[ 5536.043844] [] rebuild_sched_domains_locked+0x64/0xb0 +[ 5536.051336] [] update_flag+0x11d/0x210 +[ 5536.057373] [] ? mutex_lock_nested+0x2df/0x450 +[ 5536.064186] [] ? cpuset_css_offline+0x1b/0x60 +[ 5536.070899] [] ? trace_hardirqs_on+0xd/0x10 +[ 5536.077420] [] ? mutex_lock_nested+0x2df/0x450 +[ 5536.084234] [] ? 
css_killed_work_fn+0x25/0x220 +[ 5536.091049] [] cpuset_css_offline+0x35/0x60 +[ 5536.097571] [] css_killed_work_fn+0x5c/0x220 +[ 5536.104207] [] process_one_work+0x1df/0x710 +[ 5536.110736] [] ? process_one_work+0x160/0x710 +[ 5536.117461] [] worker_thread+0x12b/0x4a0 +[ 5536.123697] [] ? process_one_work+0x710/0x710 +[ 5536.130426] [] kthread+0xfe/0x120 +[ 5536.135991] [] ret_from_fork+0x1f/0x40 +[ 5536.142041] [] ? kthread_create_on_node+0x230/0x230 + +One cgroup maintainer mentioned that "cgroup is trying to offline +a cpuset css, which takes place under cgroup_mutex. The offlining +ends up trying to drain active usages of a sysctl table which apparently +is not happening." +The real reason is that proc_sys_readdir doesn't drop the reference added +by grab_header when returning from the !dir_emit_dots path. So this cpuset +offline path will wait here forever. + +See here for details: http://www.openwall.com/lists/oss-security/2016/11/04/13 + +Fixes: f0c3b5093add ("[readdir] convert procfs") +Cc: stable@vger.kernel.org +Reported-by: CAI Qian +Tested-by: Yang Shukui +Signed-off-by: Zhou Chengming +Acked-by: Al Viro +Signed-off-by: Eric W. 
Biederman +--- + fs/proc/proc_sysctl.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c +index 55313d9..d4e37ac 100644 +--- a/fs/proc/proc_sysctl.c ++++ b/fs/proc/proc_sysctl.c +@@ -709,7 +709,7 @@ static int proc_sys_readdir(struct file *file, struct dir_context *ctx) + ctl_dir = container_of(head, struct ctl_dir, header); + + if (!dir_emit_dots(file, ctx)) +- return 0; ++ goto out; + + pos = 2; + +@@ -719,6 +719,7 @@ static int proc_sys_readdir(struct file *file, struct dir_context *ctx) + break; + } + } ++out: + sysctl_head_finish(head); + return 0; + } +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-9754/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-9754/ANY/0.patch index e69de29b..50952d96 100644 --- a/Patches/Linux_CVEs/CVE-2016-9754/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-9754/ANY/0.patch @@ -0,0 +1,89 @@ +From 59643d1535eb220668692a5359de22545af579f6 Mon Sep 17 00:00:00 2001 +From: "Steven Rostedt (Red Hat)" +Date: Fri, 13 May 2016 09:34:12 -0400 +Subject: ring-buffer: Prevent overflow of size in ring_buffer_resize() + +If the size passed to ring_buffer_resize() is greater than MAX_LONG - BUF_PAGE_SIZE +then the DIV_ROUND_UP() will return zero. + +Here's the details: + + # echo 18014398509481980 > /sys/kernel/debug/tracing/buffer_size_kb + +tracing_entries_write() processes this and converts kb to bytes. + + 18014398509481980 << 10 = 18446744073709547520 + +and this is passed to ring_buffer_resize() as unsigned long size. + + size = DIV_ROUND_UP(size, BUF_PAGE_SIZE); + +Where DIV_ROUND_UP(a, b) is (a + b - 1)/b + +BUF_PAGE_SIZE is 4080 and here + + 18446744073709547520 + 4080 - 1 = 18446744073709551599 + +where 18446744073709551599 is still smaller than 2^64 + + 2^64 - 18446744073709551599 = 17 + +But now 18446744073709551599 / 4080 = 4521260802379792 + +and size = size * 4080 = 18446744073709551360 + +This is checked to make sure its still greater than 2 * 4080, +which it is. 
+ +Then we convert to the number of buffer pages needed. + + nr_page = DIV_ROUND_UP(size, BUF_PAGE_SIZE) + +but this time size is 18446744073709551360 and + + 2^64 - (18446744073709551360 + 4080 - 1) = -3823 + +Thus it overflows and the resulting number is less than 4080, which makes + + 3823 / 4080 = 0 + +an nr_pages is set to this. As we already checked against the minimum that +nr_pages may be, this causes the logic to fail as well, and we crash the +kernel. + +There's no reason to have the two DIV_ROUND_UP() (that's just result of +historical code changes), clean up the code and fix this bug. + +Cc: stable@vger.kernel.org # 3.5+ +Fixes: 83f40318dab00 ("ring-buffer: Make removal of ring buffer pages atomic") +Signed-off-by: Steven Rostedt +--- + kernel/trace/ring_buffer.c | 9 ++++----- + 1 file changed, 4 insertions(+), 5 deletions(-) + +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index 99d64cd..9c14373 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -1657,14 +1657,13 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size, + !cpumask_test_cpu(cpu_id, buffer->cpumask)) + return size; + +- size = DIV_ROUND_UP(size, BUF_PAGE_SIZE); +- size *= BUF_PAGE_SIZE; ++ nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE); + + /* we need a minimum of two pages */ +- if (size < BUF_PAGE_SIZE * 2) +- size = BUF_PAGE_SIZE * 2; ++ if (nr_pages < 2) ++ nr_pages = 2; + +- nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE); ++ size = nr_pages * BUF_PAGE_SIZE; + + /* + * Don't succeed if resizing is disabled, as a reader might be +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-9793/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-9793/ANY/0.patch index e69de29b..2f44e337 100644 --- a/Patches/Linux_CVEs/CVE-2016-9793/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-9793/ANY/0.patch @@ -0,0 +1,49 @@ +From b98b0bc8c431e3ceb4b26b0dfc8db509518fb290 Mon Sep 17 00:00:00 2001 +From: Eric Dumazet +Date: Fri, 2 Dec 2016 09:44:53 
-0800 +Subject: net: avoid signed overflows for SO_{SND|RCV}BUFFORCE + +CAP_NET_ADMIN users should not be allowed to set negative +sk_sndbuf or sk_rcvbuf values, as it can lead to various memory +corruptions, crashes, OOM... + +Note that before commit 82981930125a ("net: cleanups in +sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF +and SO_RCVBUF were vulnerable. + +This needs to be backported to all known linux kernels. + +Again, many thanks to syzkaller team for discovering this gem. + +Signed-off-by: Eric Dumazet +Reported-by: Andrey Konovalov +Signed-off-by: David S. Miller +--- + net/core/sock.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/net/core/sock.c b/net/core/sock.c +index 5e3ca41..00a074d 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -715,7 +715,7 @@ int sock_setsockopt(struct socket *sock, int level, int optname, + val = min_t(u32, val, sysctl_wmem_max); + set_sndbuf: + sk->sk_userlocks |= SOCK_SNDBUF_LOCK; +- sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF); ++ sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF); + /* Wake up sending tasks if we upped the value. */ + sk->sk_write_space(sk); + break; +@@ -751,7 +751,7 @@ set_rcvbuf: + * returning the value we actually used in getsockopt + * is the most desirable behavior. 
+ */ +- sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF); ++ sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF); + break; + + case SO_RCVBUFFORCE: +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-9794/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-9794/ANY/0.patch index e69de29b..0c23cfde 100644 --- a/Patches/Linux_CVEs/CVE-2016-9794/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-9794/ANY/0.patch @@ -0,0 +1,44 @@ +From 3aa02cb664c5fb1042958c8d1aa8c35055a2ebc4 Mon Sep 17 00:00:00 2001 +From: Takashi Iwai +Date: Thu, 14 Apr 2016 18:02:37 +0200 +Subject: ALSA: pcm : Call kill_fasync() in stream lock + +Currently kill_fasync() is called outside the stream lock in +snd_pcm_period_elapsed(). This is potentially racy, since the stream +may get released even during the irq handler is running. Although +snd_pcm_release_substream() calls snd_pcm_drop(), this doesn't +guarantee that the irq handler finishes, thus the kill_fasync() call +outside the stream spin lock may be invoked after the substream is +detached, as recently reported by KASAN. + +As a quick workaround, move kill_fasync() call inside the stream +lock. The fasync is rarely used interface, so this shouldn't have a +big impact from the performance POV. + +Ideally, we should implement some sync mechanism for the proper finish +of stream and irq handler. But this oneliner should suffice for most +cases, so far. 
+ +Reported-by: Baozeng Ding +Signed-off-by: Takashi Iwai +--- + sound/core/pcm_lib.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c +index 3a9b66c..0aca397 100644 +--- a/sound/core/pcm_lib.c ++++ b/sound/core/pcm_lib.c +@@ -1886,8 +1886,8 @@ void snd_pcm_period_elapsed(struct snd_pcm_substream *substream) + snd_timer_interrupt(substream->timer, 1); + #endif + _end: +- snd_pcm_stream_unlock_irqrestore(substream, flags); + kill_fasync(&runtime->fasync, SIGIO, POLL_IN); ++ snd_pcm_stream_unlock_irqrestore(substream, flags); + } + + EXPORT_SYMBOL(snd_pcm_period_elapsed); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2016-9806/ANY/0.patch b/Patches/Linux_CVEs/CVE-2016-9806/ANY/0.patch index e69de29b..9c899b7e 100644 --- a/Patches/Linux_CVEs/CVE-2016-9806/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2016-9806/ANY/0.patch @@ -0,0 +1,50 @@ +From 92964c79b357efd980812c4de5c1fd2ec8bb5520 Mon Sep 17 00:00:00 2001 +From: Herbert Xu +Date: Mon, 16 May 2016 17:28:16 +0800 +Subject: netlink: Fix dump skb leak/double free + +When we free cb->skb after a dump, we do it after releasing the +lock. This means that a new dump could have started in the time +being and we'll end up freeing their skb instead of ours. + +This patch saves the skb and module before we unlock so we free +the right memory. + +Fixes: 16b304f3404f ("netlink: Eliminate kmalloc in netlink dump operation.") +Reported-by: Baozeng Ding +Signed-off-by: Herbert Xu +Acked-by: Cong Wang +Signed-off-by: David S. 
Miller +--- + net/netlink/af_netlink.c | 7 +++++-- + 1 file changed, 5 insertions(+), 2 deletions(-) + +diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c +index aeefe12..627f898 100644 +--- a/net/netlink/af_netlink.c ++++ b/net/netlink/af_netlink.c +@@ -2059,6 +2059,7 @@ static int netlink_dump(struct sock *sk) + struct netlink_callback *cb; + struct sk_buff *skb = NULL; + struct nlmsghdr *nlh; ++ struct module *module; + int len, err = -ENOBUFS; + int alloc_min_size; + int alloc_size; +@@ -2134,9 +2135,11 @@ static int netlink_dump(struct sock *sk) + cb->done(cb); + + nlk->cb_running = false; ++ module = cb->module; ++ skb = cb->skb; + mutex_unlock(nlk->cb_mutex); +- module_put(cb->module); +- consume_skb(cb->skb); ++ module_put(module); ++ consume_skb(skb); + return 0; + + errout_skb: +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-5967/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-5967/ANY/0.patch index e69de29b..b7bd1067 100644 --- a/Patches/Linux_CVEs/CVE-2017-5967/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-5967/ANY/0.patch @@ -0,0 +1,939 @@ +From dfb4357da6ddbdf57d583ba64361c9d792b0e0b1 Mon Sep 17 00:00:00 2001 +From: Kees Cook +Date: Wed, 8 Feb 2017 11:26:59 -0800 +Subject: time: Remove CONFIG_TIMER_STATS + +Currently CONFIG_TIMER_STATS exposes process information across namespaces: + +kernel/time/timer_list.c print_timer(): + + SEQ_printf(m, ", %s/%d", tmp, timer->start_pid); + +/proc/timer_list: + + #11: <0000000000000000>, hrtimer_wakeup, S:01, do_nanosleep, cron/2570 + +Given that the tracer can give the same information, this patch entirely +removes CONFIG_TIMER_STATS. + +Suggested-by: Thomas Gleixner +Signed-off-by: Kees Cook +Acked-by: John Stultz +Cc: Nicolas Pitre +Cc: linux-doc@vger.kernel.org +Cc: Lai Jiangshan +Cc: Shuah Khan +Cc: Xing Gao +Cc: Jonathan Corbet +Cc: Jessica Frazelle +Cc: kernel-hardening@lists.openwall.com +Cc: Nicolas Iooss +Cc: "Paul E. 
McKenney" +Cc: Petr Mladek +Cc: Richard Cochran +Cc: Tejun Heo +Cc: Michal Marek +Cc: Josh Poimboeuf +Cc: Dmitry Vyukov +Cc: Oleg Nesterov +Cc: "Eric W. Biederman" +Cc: Olof Johansson +Cc: Andrew Morton +Cc: linux-api@vger.kernel.org +Cc: Arjan van de Ven +Link: http://lkml.kernel.org/r/20170208192659.GA32582@beast +Signed-off-by: Thomas Gleixner +--- + Documentation/timers/timer_stats.txt | 73 ------ + include/linux/hrtimer.h | 11 - + include/linux/timer.h | 45 ---- + kernel/kthread.c | 1 - + kernel/time/Makefile | 1 - + kernel/time/hrtimer.c | 38 ---- + kernel/time/timer.c | 48 +--- + kernel/time/timer_list.c | 10 - + kernel/time/timer_stats.c | 425 ----------------------------------- + kernel/workqueue.c | 2 - + lib/Kconfig.debug | 14 -- + 11 files changed, 2 insertions(+), 666 deletions(-) + delete mode 100644 Documentation/timers/timer_stats.txt + delete mode 100644 kernel/time/timer_stats.c + +diff --git a/Documentation/timers/timer_stats.txt b/Documentation/timers/timer_stats.txt +deleted file mode 100644 +index de835ee..0000000 +--- a/Documentation/timers/timer_stats.txt ++++ /dev/null +@@ -1,73 +0,0 @@ +-timer_stats - timer usage statistics +------------------------------------- +- +-timer_stats is a debugging facility to make the timer (ab)usage in a Linux +-system visible to kernel and userspace developers. If enabled in the config +-but not used it has almost zero runtime overhead, and a relatively small +-data structure overhead. Even if collection is enabled runtime all the +-locking is per-CPU and lookup is hashed. +- +-timer_stats should be used by kernel and userspace developers to verify that +-their code does not make unduly use of timers. This helps to avoid unnecessary +-wakeups, which should be avoided to optimize power consumption. +- +-It can be enabled by CONFIG_TIMER_STATS in the "Kernel hacking" configuration +-section. 
+- +-timer_stats collects information about the timer events which are fired in a +-Linux system over a sample period: +- +-- the pid of the task(process) which initialized the timer +-- the name of the process which initialized the timer +-- the function where the timer was initialized +-- the callback function which is associated to the timer +-- the number of events (callbacks) +- +-timer_stats adds an entry to /proc: /proc/timer_stats +- +-This entry is used to control the statistics functionality and to read out the +-sampled information. +- +-The timer_stats functionality is inactive on bootup. +- +-To activate a sample period issue: +-# echo 1 >/proc/timer_stats +- +-To stop a sample period issue: +-# echo 0 >/proc/timer_stats +- +-The statistics can be retrieved by: +-# cat /proc/timer_stats +- +-While sampling is enabled, each readout from /proc/timer_stats will see +-newly updated statistics. Once sampling is disabled, the sampled information +-is kept until a new sample period is started. This allows multiple readouts. +- +-Sample output of /proc/timer_stats: +- +-Timerstats sample period: 3.888770 s +- 12, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick) +- 15, 1 swapper hcd_submit_urb (rh_timer_func) +- 4, 959 kedac schedule_timeout (process_timeout) +- 1, 0 swapper page_writeback_init (wb_timer_fn) +- 28, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick) +- 22, 2948 IRQ 4 tty_flip_buffer_push (delayed_work_timer_fn) +- 3, 3100 bash schedule_timeout (process_timeout) +- 1, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) +- 1, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) +- 1, 1 swapper neigh_table_init_no_netlink (neigh_periodic_timer) +- 1, 2292 ip __netdev_watchdog_up (dev_watchdog) +- 1, 23 events/1 do_cache_clean (delayed_work_timer_fn) +-90 total events, 30.0 events/sec +- +-The first column is the number of events, the second column the pid, the third +-column is the name of the process. 
The forth column shows the function which +-initialized the timer and in parenthesis the callback function which was +-executed on expiry. +- +- Thomas, Ingo +- +-Added flag to indicate 'deferrable timer' in /proc/timer_stats. A deferrable +-timer will appear as follows +- 10D, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) +- +diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h +index cdab81b..e52b427 100644 +--- a/include/linux/hrtimer.h ++++ b/include/linux/hrtimer.h +@@ -88,12 +88,6 @@ enum hrtimer_restart { + * @base: pointer to the timer base (per cpu and per clock) + * @state: state information (See bit values above) + * @is_rel: Set if the timer was armed relative +- * @start_pid: timer statistics field to store the pid of the task which +- * started the timer +- * @start_site: timer statistics field to store the site where the timer +- * was started +- * @start_comm: timer statistics field to store the name of the process which +- * started the timer + * + * The hrtimer structure must be initialized by hrtimer_init() + */ +@@ -104,11 +98,6 @@ struct hrtimer { + struct hrtimer_clock_base *base; + u8 state; + u8 is_rel; +-#ifdef CONFIG_TIMER_STATS +- int start_pid; +- void *start_site; +- char start_comm[16]; +-#endif + }; + + /** +diff --git a/include/linux/timer.h b/include/linux/timer.h +index 51d601f..5a209b8 100644 +--- a/include/linux/timer.h ++++ b/include/linux/timer.h +@@ -20,11 +20,6 @@ struct timer_list { + unsigned long data; + u32 flags; + +-#ifdef CONFIG_TIMER_STATS +- int start_pid; +- void *start_site; +- char start_comm[16]; +-#endif + #ifdef CONFIG_LOCKDEP + struct lockdep_map lockdep_map; + #endif +@@ -197,46 +192,6 @@ extern int mod_timer_pending(struct timer_list *timer, unsigned long expires); + */ + #define NEXT_TIMER_MAX_DELTA ((1UL << 30) - 1) + +-/* +- * Timer-statistics info: +- */ +-#ifdef CONFIG_TIMER_STATS +- +-extern int timer_stats_active; +- +-extern void init_timer_stats(void); +- +-extern void 
timer_stats_update_stats(void *timer, pid_t pid, void *startf, +- void *timerf, char *comm, u32 flags); +- +-extern void __timer_stats_timer_set_start_info(struct timer_list *timer, +- void *addr); +- +-static inline void timer_stats_timer_set_start_info(struct timer_list *timer) +-{ +- if (likely(!timer_stats_active)) +- return; +- __timer_stats_timer_set_start_info(timer, __builtin_return_address(0)); +-} +- +-static inline void timer_stats_timer_clear_start_info(struct timer_list *timer) +-{ +- timer->start_site = NULL; +-} +-#else +-static inline void init_timer_stats(void) +-{ +-} +- +-static inline void timer_stats_timer_set_start_info(struct timer_list *timer) +-{ +-} +- +-static inline void timer_stats_timer_clear_start_info(struct timer_list *timer) +-{ +-} +-#endif +- + extern void add_timer(struct timer_list *timer); + + extern int try_to_del_timer_sync(struct timer_list *timer); +diff --git a/kernel/kthread.c b/kernel/kthread.c +index 2318fba..8461a43 100644 +--- a/kernel/kthread.c ++++ b/kernel/kthread.c +@@ -850,7 +850,6 @@ void __kthread_queue_delayed_work(struct kthread_worker *worker, + + list_add(&work->node, &worker->delayed_work_list); + work->worker = worker; +- timer_stats_timer_set_start_info(&dwork->timer); + timer->expires = jiffies + delay; + add_timer(timer); + } +diff --git a/kernel/time/Makefile b/kernel/time/Makefile +index 976840d..938dbf3 100644 +--- a/kernel/time/Makefile ++++ b/kernel/time/Makefile +@@ -15,6 +15,5 @@ ifeq ($(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST),y) + endif + obj-$(CONFIG_GENERIC_SCHED_CLOCK) += sched_clock.o + obj-$(CONFIG_TICK_ONESHOT) += tick-oneshot.o tick-sched.o +-obj-$(CONFIG_TIMER_STATS) += timer_stats.o + obj-$(CONFIG_DEBUG_FS) += timekeeping_debug.o + obj-$(CONFIG_TEST_UDELAY) += test_udelay.o +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index c6ecedd..edabde6 100644 +--- a/kernel/time/hrtimer.c ++++ b/kernel/time/hrtimer.c +@@ -766,34 +766,6 @@ void hrtimers_resume(void) + 
clock_was_set_delayed(); + } + +-static inline void timer_stats_hrtimer_set_start_info(struct hrtimer *timer) +-{ +-#ifdef CONFIG_TIMER_STATS +- if (timer->start_site) +- return; +- timer->start_site = __builtin_return_address(0); +- memcpy(timer->start_comm, current->comm, TASK_COMM_LEN); +- timer->start_pid = current->pid; +-#endif +-} +- +-static inline void timer_stats_hrtimer_clear_start_info(struct hrtimer *timer) +-{ +-#ifdef CONFIG_TIMER_STATS +- timer->start_site = NULL; +-#endif +-} +- +-static inline void timer_stats_account_hrtimer(struct hrtimer *timer) +-{ +-#ifdef CONFIG_TIMER_STATS +- if (likely(!timer_stats_active)) +- return; +- timer_stats_update_stats(timer, timer->start_pid, timer->start_site, +- timer->function, timer->start_comm, 0); +-#endif +-} +- + /* + * Counterpart to lock_hrtimer_base above: + */ +@@ -932,7 +904,6 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest + * rare case and less expensive than a smp call. + */ + debug_deactivate(timer); +- timer_stats_hrtimer_clear_start_info(timer); + reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases); + + if (!restart) +@@ -990,8 +961,6 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, + /* Switch the timer base, if necessary: */ + new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); + +- timer_stats_hrtimer_set_start_info(timer); +- + leftmost = enqueue_hrtimer(timer, new_base); + if (!leftmost) + goto unlock; +@@ -1128,12 +1097,6 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, + base = hrtimer_clockid_to_base(clock_id); + timer->base = &cpu_base->clock_base[base]; + timerqueue_init(&timer->node); +- +-#ifdef CONFIG_TIMER_STATS +- timer->start_site = NULL; +- timer->start_pid = -1; +- memset(timer->start_comm, 0, TASK_COMM_LEN); +-#endif + } + + /** +@@ -1217,7 +1180,6 @@ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base, + raw_write_seqcount_barrier(&cpu_base->seq); + + 
__remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0); +- timer_stats_account_hrtimer(timer); + fn = timer->function; + + /* +diff --git a/kernel/time/timer.c b/kernel/time/timer.c +index ec33a69..82a6bfa 100644 +--- a/kernel/time/timer.c ++++ b/kernel/time/timer.c +@@ -571,38 +571,6 @@ internal_add_timer(struct timer_base *base, struct timer_list *timer) + trigger_dyntick_cpu(base, timer); + } + +-#ifdef CONFIG_TIMER_STATS +-void __timer_stats_timer_set_start_info(struct timer_list *timer, void *addr) +-{ +- if (timer->start_site) +- return; +- +- timer->start_site = addr; +- memcpy(timer->start_comm, current->comm, TASK_COMM_LEN); +- timer->start_pid = current->pid; +-} +- +-static void timer_stats_account_timer(struct timer_list *timer) +-{ +- void *site; +- +- /* +- * start_site can be concurrently reset by +- * timer_stats_timer_clear_start_info() +- */ +- site = READ_ONCE(timer->start_site); +- if (likely(!site)) +- return; +- +- timer_stats_update_stats(timer, timer->start_pid, site, +- timer->function, timer->start_comm, +- timer->flags); +-} +- +-#else +-static void timer_stats_account_timer(struct timer_list *timer) {} +-#endif +- + #ifdef CONFIG_DEBUG_OBJECTS_TIMERS + + static struct debug_obj_descr timer_debug_descr; +@@ -789,11 +757,6 @@ static void do_init_timer(struct timer_list *timer, unsigned int flags, + { + timer->entry.pprev = NULL; + timer->flags = flags | raw_smp_processor_id(); +-#ifdef CONFIG_TIMER_STATS +- timer->start_site = NULL; +- timer->start_pid = -1; +- memset(timer->start_comm, 0, TASK_COMM_LEN); +-#endif + lockdep_init_map(&timer->lockdep_map, name, key, 0); + } + +@@ -1001,8 +964,6 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only) + base = lock_timer_base(timer, &flags); + } + +- timer_stats_timer_set_start_info(timer); +- + ret = detach_if_pending(timer, base, false); + if (!ret && pending_only) + goto out_unlock; +@@ -1130,7 +1091,6 @@ void add_timer_on(struct timer_list *timer, int cpu) + 
struct timer_base *new_base, *base; + unsigned long flags; + +- timer_stats_timer_set_start_info(timer); + BUG_ON(timer_pending(timer) || !timer->function); + + new_base = get_timer_cpu_base(timer->flags, cpu); +@@ -1176,7 +1136,6 @@ int del_timer(struct timer_list *timer) + + debug_assert_init(timer); + +- timer_stats_timer_clear_start_info(timer); + if (timer_pending(timer)) { + base = lock_timer_base(timer, &flags); + ret = detach_if_pending(timer, base, true); +@@ -1204,10 +1163,9 @@ int try_to_del_timer_sync(struct timer_list *timer) + + base = lock_timer_base(timer, &flags); + +- if (base->running_timer != timer) { +- timer_stats_timer_clear_start_info(timer); ++ if (base->running_timer != timer) + ret = detach_if_pending(timer, base, true); +- } ++ + spin_unlock_irqrestore(&base->lock, flags); + + return ret; +@@ -1331,7 +1289,6 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head) + unsigned long data; + + timer = hlist_entry(head->first, struct timer_list, entry); +- timer_stats_account_timer(timer); + + base->running_timer = timer; + detach_timer(timer, true); +@@ -1868,7 +1825,6 @@ static void __init init_timer_cpus(void) + void __init init_timers(void) + { + init_timer_cpus(); +- init_timer_stats(); + open_softirq(TIMER_SOFTIRQ, run_timer_softirq); + } + +diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c +index afe6cd1..387a3a5 100644 +--- a/kernel/time/timer_list.c ++++ b/kernel/time/timer_list.c +@@ -62,21 +62,11 @@ static void + print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer, + int idx, u64 now) + { +-#ifdef CONFIG_TIMER_STATS +- char tmp[TASK_COMM_LEN + 1]; +-#endif + SEQ_printf(m, " #%d: ", idx); + print_name_offset(m, taddr); + SEQ_printf(m, ", "); + print_name_offset(m, timer->function); + SEQ_printf(m, ", S:%02x", timer->state); +-#ifdef CONFIG_TIMER_STATS +- SEQ_printf(m, ", "); +- print_name_offset(m, timer->start_site); +- memcpy(tmp, timer->start_comm, TASK_COMM_LEN); +- 
tmp[TASK_COMM_LEN] = 0; +- SEQ_printf(m, ", %s/%d", tmp, timer->start_pid); +-#endif + SEQ_printf(m, "\n"); + SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n", + (unsigned long long)ktime_to_ns(hrtimer_get_softexpires(timer)), +diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c +deleted file mode 100644 +index afddded..0000000 +--- a/kernel/time/timer_stats.c ++++ /dev/null +@@ -1,425 +0,0 @@ +-/* +- * kernel/time/timer_stats.c +- * +- * Collect timer usage statistics. +- * +- * Copyright(C) 2006, Red Hat, Inc., Ingo Molnar +- * Copyright(C) 2006 Timesys Corp., Thomas Gleixner +- * +- * timer_stats is based on timer_top, a similar functionality which was part of +- * Con Kolivas dyntick patch set. It was developed by Daniel Petrini at the +- * Instituto Nokia de Tecnologia - INdT - Manaus. timer_top's design was based +- * on dynamic allocation of the statistics entries and linear search based +- * lookup combined with a global lock, rather than the static array, hash +- * and per-CPU locking which is used by timer_stats. It was written for the +- * pre hrtimer kernel code and therefore did not take hrtimers into account. +- * Nevertheless it provided the base for the timer_stats implementation and +- * was a helpful source of inspiration. Kudos to Daniel and the Nokia folks +- * for this effort. +- * +- * timer_top.c is +- * Copyright (C) 2005 Instituto Nokia de Tecnologia - INdT - Manaus +- * Written by Daniel Petrini +- * timer_top.c was released under the GNU General Public License version 2 +- * +- * We export the addresses and counting of timer functions being called, +- * the pid and cmdline from the owner process if applicable. 
+- * +- * Start/stop data collection: +- * # echo [1|0] >/proc/timer_stats +- * +- * Display the information collected so far: +- * # cat /proc/timer_stats +- * +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License version 2 as +- * published by the Free Software Foundation. +- */ +- +-#include +-#include +-#include +-#include +-#include +-#include +- +-#include +- +-/* +- * This is our basic unit of interest: a timer expiry event identified +- * by the timer, its start/expire functions and the PID of the task that +- * started the timer. We count the number of times an event happens: +- */ +-struct entry { +- /* +- * Hash list: +- */ +- struct entry *next; +- +- /* +- * Hash keys: +- */ +- void *timer; +- void *start_func; +- void *expire_func; +- pid_t pid; +- +- /* +- * Number of timeout events: +- */ +- unsigned long count; +- u32 flags; +- +- /* +- * We save the command-line string to preserve +- * this information past task exit: +- */ +- char comm[TASK_COMM_LEN + 1]; +- +-} ____cacheline_aligned_in_smp; +- +-/* +- * Spinlock protecting the tables - not taken during lookup: +- */ +-static DEFINE_RAW_SPINLOCK(table_lock); +- +-/* +- * Per-CPU lookup locks for fast hash lookup: +- */ +-static DEFINE_PER_CPU(raw_spinlock_t, tstats_lookup_lock); +- +-/* +- * Mutex to serialize state changes with show-stats activities: +- */ +-static DEFINE_MUTEX(show_mutex); +- +-/* +- * Collection status, active/inactive: +- */ +-int __read_mostly timer_stats_active; +- +-/* +- * Beginning/end timestamps of measurement: +- */ +-static ktime_t time_start, time_stop; +- +-/* +- * tstat entry structs only get allocated while collection is +- * active and never freed during that time - this simplifies +- * things quite a bit. +- * +- * They get freed when a new collection period is started. 
+- */ +-#define MAX_ENTRIES_BITS 10 +-#define MAX_ENTRIES (1UL << MAX_ENTRIES_BITS) +- +-static unsigned long nr_entries; +-static struct entry entries[MAX_ENTRIES]; +- +-static atomic_t overflow_count; +- +-/* +- * The entries are in a hash-table, for fast lookup: +- */ +-#define TSTAT_HASH_BITS (MAX_ENTRIES_BITS - 1) +-#define TSTAT_HASH_SIZE (1UL << TSTAT_HASH_BITS) +-#define TSTAT_HASH_MASK (TSTAT_HASH_SIZE - 1) +- +-#define __tstat_hashfn(entry) \ +- (((unsigned long)(entry)->timer ^ \ +- (unsigned long)(entry)->start_func ^ \ +- (unsigned long)(entry)->expire_func ^ \ +- (unsigned long)(entry)->pid ) & TSTAT_HASH_MASK) +- +-#define tstat_hashentry(entry) (tstat_hash_table + __tstat_hashfn(entry)) +- +-static struct entry *tstat_hash_table[TSTAT_HASH_SIZE] __read_mostly; +- +-static void reset_entries(void) +-{ +- nr_entries = 0; +- memset(entries, 0, sizeof(entries)); +- memset(tstat_hash_table, 0, sizeof(tstat_hash_table)); +- atomic_set(&overflow_count, 0); +-} +- +-static struct entry *alloc_entry(void) +-{ +- if (nr_entries >= MAX_ENTRIES) +- return NULL; +- +- return entries + nr_entries++; +-} +- +-static int match_entries(struct entry *entry1, struct entry *entry2) +-{ +- return entry1->timer == entry2->timer && +- entry1->start_func == entry2->start_func && +- entry1->expire_func == entry2->expire_func && +- entry1->pid == entry2->pid; +-} +- +-/* +- * Look up whether an entry matching this item is present +- * in the hash already. 
Must be called with irqs off and the +- * lookup lock held: +- */ +-static struct entry *tstat_lookup(struct entry *entry, char *comm) +-{ +- struct entry **head, *curr, *prev; +- +- head = tstat_hashentry(entry); +- curr = *head; +- +- /* +- * The fastpath is when the entry is already hashed, +- * we do this with the lookup lock held, but with the +- * table lock not held: +- */ +- while (curr) { +- if (match_entries(curr, entry)) +- return curr; +- +- curr = curr->next; +- } +- /* +- * Slowpath: allocate, set up and link a new hash entry: +- */ +- prev = NULL; +- curr = *head; +- +- raw_spin_lock(&table_lock); +- /* +- * Make sure we have not raced with another CPU: +- */ +- while (curr) { +- if (match_entries(curr, entry)) +- goto out_unlock; +- +- prev = curr; +- curr = curr->next; +- } +- +- curr = alloc_entry(); +- if (curr) { +- *curr = *entry; +- curr->count = 0; +- curr->next = NULL; +- memcpy(curr->comm, comm, TASK_COMM_LEN); +- +- smp_mb(); /* Ensure that curr is initialized before insert */ +- +- if (prev) +- prev->next = curr; +- else +- *head = curr; +- } +- out_unlock: +- raw_spin_unlock(&table_lock); +- +- return curr; +-} +- +-/** +- * timer_stats_update_stats - Update the statistics for a timer. +- * @timer: pointer to either a timer_list or a hrtimer +- * @pid: the pid of the task which set up the timer +- * @startf: pointer to the function which did the timer setup +- * @timerf: pointer to the timer callback function of the timer +- * @comm: name of the process which set up the timer +- * @tflags: The flags field of the timer +- * +- * When the timer is already registered, then the event counter is +- * incremented. Otherwise the timer is registered in a free slot. 
+- */ +-void timer_stats_update_stats(void *timer, pid_t pid, void *startf, +- void *timerf, char *comm, u32 tflags) +-{ +- /* +- * It doesn't matter which lock we take: +- */ +- raw_spinlock_t *lock; +- struct entry *entry, input; +- unsigned long flags; +- +- if (likely(!timer_stats_active)) +- return; +- +- lock = &per_cpu(tstats_lookup_lock, raw_smp_processor_id()); +- +- input.timer = timer; +- input.start_func = startf; +- input.expire_func = timerf; +- input.pid = pid; +- input.flags = tflags; +- +- raw_spin_lock_irqsave(lock, flags); +- if (!timer_stats_active) +- goto out_unlock; +- +- entry = tstat_lookup(&input, comm); +- if (likely(entry)) +- entry->count++; +- else +- atomic_inc(&overflow_count); +- +- out_unlock: +- raw_spin_unlock_irqrestore(lock, flags); +-} +- +-static void print_name_offset(struct seq_file *m, unsigned long addr) +-{ +- char symname[KSYM_NAME_LEN]; +- +- if (lookup_symbol_name(addr, symname) < 0) +- seq_printf(m, "<%p>", (void *)addr); +- else +- seq_printf(m, "%s", symname); +-} +- +-static int tstats_show(struct seq_file *m, void *v) +-{ +- struct timespec64 period; +- struct entry *entry; +- unsigned long ms; +- long events = 0; +- ktime_t time; +- int i; +- +- mutex_lock(&show_mutex); +- /* +- * If still active then calculate up to now: +- */ +- if (timer_stats_active) +- time_stop = ktime_get(); +- +- time = ktime_sub(time_stop, time_start); +- +- period = ktime_to_timespec64(time); +- ms = period.tv_nsec / 1000000; +- +- seq_puts(m, "Timer Stats Version: v0.3\n"); +- seq_printf(m, "Sample period: %ld.%03ld s\n", (long)period.tv_sec, ms); +- if (atomic_read(&overflow_count)) +- seq_printf(m, "Overflow: %d entries\n", atomic_read(&overflow_count)); +- seq_printf(m, "Collection: %s\n", timer_stats_active ? 
"active" : "inactive"); +- +- for (i = 0; i < nr_entries; i++) { +- entry = entries + i; +- if (entry->flags & TIMER_DEFERRABLE) { +- seq_printf(m, "%4luD, %5d %-16s ", +- entry->count, entry->pid, entry->comm); +- } else { +- seq_printf(m, " %4lu, %5d %-16s ", +- entry->count, entry->pid, entry->comm); +- } +- +- print_name_offset(m, (unsigned long)entry->start_func); +- seq_puts(m, " ("); +- print_name_offset(m, (unsigned long)entry->expire_func); +- seq_puts(m, ")\n"); +- +- events += entry->count; +- } +- +- ms += period.tv_sec * 1000; +- if (!ms) +- ms = 1; +- +- if (events && period.tv_sec) +- seq_printf(m, "%ld total events, %ld.%03ld events/sec\n", +- events, events * 1000 / ms, +- (events * 1000000 / ms) % 1000); +- else +- seq_printf(m, "%ld total events\n", events); +- +- mutex_unlock(&show_mutex); +- +- return 0; +-} +- +-/* +- * After a state change, make sure all concurrent lookup/update +- * activities have stopped: +- */ +-static void sync_access(void) +-{ +- unsigned long flags; +- int cpu; +- +- for_each_online_cpu(cpu) { +- raw_spinlock_t *lock = &per_cpu(tstats_lookup_lock, cpu); +- +- raw_spin_lock_irqsave(lock, flags); +- /* nothing */ +- raw_spin_unlock_irqrestore(lock, flags); +- } +-} +- +-static ssize_t tstats_write(struct file *file, const char __user *buf, +- size_t count, loff_t *offs) +-{ +- char ctl[2]; +- +- if (count != 2 || *offs) +- return -EINVAL; +- +- if (copy_from_user(ctl, buf, count)) +- return -EFAULT; +- +- mutex_lock(&show_mutex); +- switch (ctl[0]) { +- case '0': +- if (timer_stats_active) { +- timer_stats_active = 0; +- time_stop = ktime_get(); +- sync_access(); +- } +- break; +- case '1': +- if (!timer_stats_active) { +- reset_entries(); +- time_start = ktime_get(); +- smp_mb(); +- timer_stats_active = 1; +- } +- break; +- default: +- count = -EINVAL; +- } +- mutex_unlock(&show_mutex); +- +- return count; +-} +- +-static int tstats_open(struct inode *inode, struct file *filp) +-{ +- return single_open(filp, 
tstats_show, NULL); +-} +- +-static const struct file_operations tstats_fops = { +- .open = tstats_open, +- .read = seq_read, +- .write = tstats_write, +- .llseek = seq_lseek, +- .release = single_release, +-}; +- +-void __init init_timer_stats(void) +-{ +- int cpu; +- +- for_each_possible_cpu(cpu) +- raw_spin_lock_init(&per_cpu(tstats_lookup_lock, cpu)); +-} +- +-static int __init init_tstats_procfs(void) +-{ +- struct proc_dir_entry *pe; +- +- pe = proc_create("timer_stats", 0644, NULL, &tstats_fops); +- if (!pe) +- return -ENOMEM; +- return 0; +-} +-__initcall(init_tstats_procfs); +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index 1d9fb65..072cbc9 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -1523,8 +1523,6 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq, + return; + } + +- timer_stats_timer_set_start_info(&dwork->timer); +- + dwork->wq = wq; + dwork->cpu = cpu; + timer->expires = jiffies + delay; +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug +index eb9e9a7..132af33 100644 +--- a/lib/Kconfig.debug ++++ b/lib/Kconfig.debug +@@ -980,20 +980,6 @@ config DEBUG_TIMEKEEPING + + If unsure, say N. + +-config TIMER_STATS +- bool "Collect kernel timers statistics" +- depends on DEBUG_KERNEL && PROC_FS +- help +- If you say Y here, additional code will be inserted into the +- timer routines to collect statistics about kernel timers being +- reprogrammed. The statistics can be read from /proc/timer_stats. +- The statistics collection is started by writing 1 to /proc/timer_stats, +- writing 0 stops it. This feature is useful to collect information +- about timer usage patterns in kernel and userspace. This feature +- is lightweight if enabled in the kernel config but not activated +- (it defaults to deactivated on bootup and will only be activated +- if some application like powertop activates it explicitly). 
+- + config DEBUG_PREEMPT + bool "Debug preemptible kernel" + depends on DEBUG_KERNEL && PREEMPT && TRACE_IRQFLAGS_SUPPORT +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-5986/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-5986/ANY/0.patch index e69de29b..8dfe30ea 100644 --- a/Patches/Linux_CVEs/CVE-2017-5986/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-5986/ANY/0.patch @@ -0,0 +1,39 @@ +From 2dcab598484185dea7ec22219c76dcdd59e3cb90 Mon Sep 17 00:00:00 2001 +From: Marcelo Ricardo Leitner +Date: Mon, 6 Feb 2017 18:10:31 -0200 +Subject: sctp: avoid BUG_ON on sctp_wait_for_sndbuf + +Alexander Popov reported that an application may trigger a BUG_ON in +sctp_wait_for_sndbuf if the socket tx buffer is full, a thread is +waiting on it to queue more data and meanwhile another thread peels off +the association being used by the first thread. + +This patch replaces the BUG_ON call with a proper error handling. It +will return -EPIPE to the original sendmsg call, similarly to what would +have been done if the association wasn't found in the first place. + +Acked-by: Alexander Popov +Signed-off-by: Marcelo Ricardo Leitner +Reviewed-by: Xin Long +Signed-off-by: David S. 
Miller +--- + net/sctp/socket.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/net/sctp/socket.c b/net/sctp/socket.c +index 37eeab7..e214d2e 100644 +--- a/net/sctp/socket.c ++++ b/net/sctp/socket.c +@@ -7426,7 +7426,8 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p, + */ + release_sock(sk); + current_timeo = schedule_timeout(current_timeo); +- BUG_ON(sk != asoc->base.sk); ++ if (sk != asoc->base.sk) ++ goto do_error; + lock_sock(sk); + + *timeo_p = current_timeo; +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-6001/3.4/0.patch b/Patches/Linux_CVEs/CVE-2017-6001/3.4/0.patch index e69de29b..a9442471 100644 --- a/Patches/Linux_CVEs/CVE-2017-6001/3.4/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-6001/3.4/0.patch @@ -0,0 +1,159 @@ +From 9eb0e01be831d0f37ea6278a92c32424141f55fb Mon Sep 17 00:00:00 2001 +From: Peter Zijlstra +Date: Wed, 11 Jan 2017 21:09:50 +0100 +Subject: perf/core: Fix concurrent sys_perf_event_open() vs. 'move_group' race + +commit 321027c1fe77f892f4ea07846aeae08cefbbb290 upstream. + +Di Shen reported a race between two concurrent sys_perf_event_open() +calls where both try and move the same pre-existing software group +into a hardware context. + +The problem is exactly that described in commit: + + f63a8daa5812 ("perf: Fix event->ctx locking") + +... where, while we wait for a ctx->mutex acquisition, the event->ctx +relation can have changed under us. + +That very same commit failed to recognise sys_perf_event_context() as an +external access vector to the events and thereby didn't apply the +established locking rules correctly. + +So while one sys_perf_event_open() call is stuck waiting on +mutex_lock_double(), the other (which owns said locks) moves the group +about. So by the time the former sys_perf_event_open() acquires the +locks, the context we've acquired is stale (and possibly dead). 
+ +Apply the established locking rules as per perf_event_ctx_lock_nested() +to the mutex_lock_double() for the 'move_group' case. This obviously means +we need to validate state after we acquire the locks. + +Reported-by: Di Shen (Keen Lab) +Tested-by: John Dias +Signed-off-by: Peter Zijlstra (Intel) +Cc: Alexander Shishkin +Cc: Arnaldo Carvalho de Melo +Cc: Arnaldo Carvalho de Melo +Cc: Jiri Olsa +Cc: Kees Cook +Cc: Linus Torvalds +Cc: Min Chong +Cc: Peter Zijlstra +Cc: Stephane Eranian +Cc: Thomas Gleixner +Cc: Vince Weaver +Fixes: f63a8daa5812 ("perf: Fix event->ctx locking") +Link: http://lkml.kernel.org/r/20170106131444.GZ3174@twins.programming.kicks-ass.net +Signed-off-by: Ingo Molnar +[bwh: Backported to 3.2: + - Use ACCESS_ONCE() instead of READ_ONCE() + - Test perf_event::group_flags instead of group_caps + - Add the err_locked cleanup block, which we didn't need before + - Adjust context] +Signed-off-by: Ben Hutchings +--- + kernel/events/core.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++---- + 1 file changed, 57 insertions(+), 4 deletions(-) + +diff --git a/kernel/events/core.c b/kernel/events/core.c +index a301c68..49a1db4 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -6474,6 +6474,37 @@ static void mutex_lock_double(struct mutex *a, struct mutex *b) + mutex_lock_nested(b, SINGLE_DEPTH_NESTING); + } + ++/* ++ * Variation on perf_event_ctx_lock_nested(), except we take two context ++ * mutexes. 
++ */ ++static struct perf_event_context * ++__perf_event_ctx_lock_double(struct perf_event *group_leader, ++ struct perf_event_context *ctx) ++{ ++ struct perf_event_context *gctx; ++ ++again: ++ rcu_read_lock(); ++ gctx = ACCESS_ONCE(group_leader->ctx); ++ if (!atomic_inc_not_zero(&gctx->refcount)) { ++ rcu_read_unlock(); ++ goto again; ++ } ++ rcu_read_unlock(); ++ ++ mutex_lock_double(&gctx->mutex, &ctx->mutex); ++ ++ if (group_leader->ctx != gctx) { ++ mutex_unlock(&ctx->mutex); ++ mutex_unlock(&gctx->mutex); ++ put_ctx(gctx); ++ goto again; ++ } ++ ++ return gctx; ++} ++ + /** + * sys_perf_event_open - open a performance event, associate it to a task/cpu + * +@@ -6661,14 +6692,31 @@ SYSCALL_DEFINE5(perf_event_open, + } + + if (move_group) { +- gctx = group_leader->ctx; ++ gctx = __perf_event_ctx_lock_double(group_leader, ctx); ++ ++ /* ++ * Check if we raced against another sys_perf_event_open() call ++ * moving the software group underneath us. ++ */ ++ if (!(group_leader->group_flags & PERF_GROUP_SOFTWARE)) { ++ /* ++ * If someone moved the group out from under us, check ++ * if this new event wound up on the same ctx, if so ++ * its the regular !move_group case, otherwise fail. ++ */ ++ if (gctx != ctx) { ++ err = -EINVAL; ++ goto err_locked; ++ } else { ++ perf_event_ctx_unlock(group_leader, gctx); ++ move_group = 0; ++ } ++ } + + /* + * See perf_event_ctx_lock() for comments on the details + * of swizzling perf_event::ctx. 
+ */ +- mutex_lock_double(&gctx->mutex, &ctx->mutex); +- + perf_remove_from_context(group_leader, false); + + /* +@@ -6710,7 +6758,7 @@ SYSCALL_DEFINE5(perf_event_open, + perf_unpin_context(ctx); + + if (move_group) { +- mutex_unlock(&gctx->mutex); ++ perf_event_ctx_unlock(group_leader, gctx); + put_ctx(gctx); + } + mutex_unlock(&ctx->mutex); +@@ -6737,6 +6785,11 @@ SYSCALL_DEFINE5(perf_event_open, + fd_install(event_fd, event_file); + return event_fd; + ++err_locked: ++ if (move_group) ++ perf_event_ctx_unlock(group_leader, gctx); ++ mutex_unlock(&ctx->mutex); ++ fput(event_file); + err_context: + perf_unpin_context(ctx); + put_ctx(ctx); +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-6345/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-6345/ANY/0.patch index e69de29b..57b5bbc9 100644 --- a/Patches/Linux_CVEs/CVE-2017-6345/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-6345/ANY/0.patch @@ -0,0 +1,58 @@ +From 8b74d439e1697110c5e5c600643e823eb1dd0762 Mon Sep 17 00:00:00 2001 +From: Eric Dumazet +Date: Sun, 12 Feb 2017 14:03:52 -0800 +Subject: net/llc: avoid BUG_ON() in skb_orphan() + +It seems nobody used LLC since linux-3.12. + +Fortunately fuzzers like syzkaller still know how to run this code, +otherwise it would be no fun. + +Setting skb->sk without skb->destructor leads to all kinds of +bugs, we now prefer to be very strict about it. + +Ideally here we would use skb_set_owner() but this helper does not exist yet, +only CAN seems to have a private helper for that. + +Fixes: 376c7311bdb6 ("net: add a temporary sanity check in skb_orphan()") +Signed-off-by: Eric Dumazet +Reported-by: Andrey Konovalov +Signed-off-by: David S. 
Miller +--- + net/llc/llc_conn.c | 3 +++ + net/llc/llc_sap.c | 3 +++ + 2 files changed, 6 insertions(+) + +diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c +index 3e821da..8bc5a1b 100644 +--- a/net/llc/llc_conn.c ++++ b/net/llc/llc_conn.c +@@ -821,7 +821,10 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb) + * another trick required to cope with how the PROCOM state + * machine works. -acme + */ ++ skb_orphan(skb); ++ sock_hold(sk); + skb->sk = sk; ++ skb->destructor = sock_efree; + } + if (!sock_owned_by_user(sk)) + llc_conn_rcv(sk, skb); +diff --git a/net/llc/llc_sap.c b/net/llc/llc_sap.c +index d0e1e80..5404d0d 100644 +--- a/net/llc/llc_sap.c ++++ b/net/llc/llc_sap.c +@@ -290,7 +290,10 @@ static void llc_sap_rcv(struct llc_sap *sap, struct sk_buff *skb, + + ev->type = LLC_SAP_EV_TYPE_PDU; + ev->reason = 0; ++ skb_orphan(skb); ++ sock_hold(sk); + skb->sk = sk; ++ skb->destructor = sock_efree; + llc_sap_state_process(sap, skb); + } + +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-6346/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-6346/ANY/0.patch index e69de29b..c92194d5 100644 --- a/Patches/Linux_CVEs/CVE-2017-6346/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-6346/ANY/0.patch @@ -0,0 +1,126 @@ +From d199fab63c11998a602205f7ee7ff7c05c97164b Mon Sep 17 00:00:00 2001 +From: Eric Dumazet +Date: Tue, 14 Feb 2017 09:03:51 -0800 +Subject: packet: fix races in fanout_add() + +Multiple threads can call fanout_add() at the same time. + +We need to grab fanout_mutex earlier to avoid races that could +lead to one thread freeing po->rollover that was set by another thread. + +Do the same in fanout_release(), for peace of mind, and to help us +finding lockdep issues earlier. + +Fixes: dc99f600698d ("packet: Add fanout support.") +Fixes: 0648ab70afe6 ("packet: rollover prepare: per-socket state") +Signed-off-by: Eric Dumazet +Cc: Willem de Bruijn +Signed-off-by: David S. 
Miller +--- + net/packet/af_packet.c | 55 +++++++++++++++++++++++++++----------------------- + 1 file changed, 30 insertions(+), 25 deletions(-) + +diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c +index d56ee46..0f03f6a 100644 +--- a/net/packet/af_packet.c ++++ b/net/packet/af_packet.c +@@ -1619,6 +1619,7 @@ static void fanout_release_data(struct packet_fanout *f) + + static int fanout_add(struct sock *sk, u16 id, u16 type_flags) + { ++ struct packet_rollover *rollover = NULL; + struct packet_sock *po = pkt_sk(sk); + struct packet_fanout *f, *match; + u8 type = type_flags & 0xff; +@@ -1641,23 +1642,28 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags) + return -EINVAL; + } + ++ mutex_lock(&fanout_mutex); ++ ++ err = -EINVAL; + if (!po->running) +- return -EINVAL; ++ goto out; + ++ err = -EALREADY; + if (po->fanout) +- return -EALREADY; ++ goto out; + + if (type == PACKET_FANOUT_ROLLOVER || + (type_flags & PACKET_FANOUT_FLAG_ROLLOVER)) { +- po->rollover = kzalloc(sizeof(*po->rollover), GFP_KERNEL); +- if (!po->rollover) +- return -ENOMEM; +- atomic_long_set(&po->rollover->num, 0); +- atomic_long_set(&po->rollover->num_huge, 0); +- atomic_long_set(&po->rollover->num_failed, 0); ++ err = -ENOMEM; ++ rollover = kzalloc(sizeof(*rollover), GFP_KERNEL); ++ if (!rollover) ++ goto out; ++ atomic_long_set(&rollover->num, 0); ++ atomic_long_set(&rollover->num_huge, 0); ++ atomic_long_set(&rollover->num_failed, 0); ++ po->rollover = rollover; + } + +- mutex_lock(&fanout_mutex); + match = NULL; + list_for_each_entry(f, &fanout_list, list) { + if (f->id == id && +@@ -1704,11 +1710,11 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags) + } + } + out: +- mutex_unlock(&fanout_mutex); +- if (err) { +- kfree(po->rollover); ++ if (err && rollover) { ++ kfree(rollover); + po->rollover = NULL; + } ++ mutex_unlock(&fanout_mutex); + return err; + } + +@@ -1717,23 +1723,22 @@ static void fanout_release(struct sock *sk) + struct packet_sock *po = 
pkt_sk(sk); + struct packet_fanout *f; + +- f = po->fanout; +- if (!f) +- return; +- + mutex_lock(&fanout_mutex); +- po->fanout = NULL; ++ f = po->fanout; ++ if (f) { ++ po->fanout = NULL; ++ ++ if (atomic_dec_and_test(&f->sk_ref)) { ++ list_del(&f->list); ++ dev_remove_pack(&f->prot_hook); ++ fanout_release_data(f); ++ kfree(f); ++ } + +- if (atomic_dec_and_test(&f->sk_ref)) { +- list_del(&f->list); +- dev_remove_pack(&f->prot_hook); +- fanout_release_data(f); +- kfree(f); ++ if (po->rollover) ++ kfree_rcu(po->rollover, rcu); + } + mutex_unlock(&fanout_mutex); +- +- if (po->rollover) +- kfree_rcu(po->rollover, rcu); + } + + static bool packet_extra_vlan_len_allowed(const struct net_device *dev, +-- +cgit v1.1 + diff --git a/Patches/Linux_CVEs/CVE-2017-7187/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-7187/ANY/0.patch index 3a7e58c6..a68b08dd 100644 --- a/Patches/Linux_CVEs/CVE-2017-7187/ANY/0.patch +++ b/Patches/Linux_CVEs/CVE-2017-7187/ANY/0.patch @@ -48,7 +48,7 @@ Acked-by: Douglas Gilbert <dgilbert@interlog.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
 drivers/scsi/sg.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index e831e01..849ff810 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -996,6 +996,8 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg)
result = get_user(val, ip);
if (result)
return result;
+ if (val > SG_MAX_CDB_SIZE)
+ return -ENOMEM;
sfp->next_cmd_len = (val > 0) ? val : 0;
return 0;
case SG_GET_VERSION_NUM:
-
+
diff --git a/Patches/Linux_CVEs/CVE-2017-9075/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-9075/ANY/0.patch
index e69de29b..238ab02f 100644
--- a/Patches/Linux_CVEs/CVE-2017-9075/ANY/0.patch
+++ b/Patches/Linux_CVEs/CVE-2017-9075/ANY/0.patch
@@ -0,0 +1,33 @@
+From fdcee2cbb8438702ea1b328fb6e0ac5e9a40c7f8 Mon Sep 17 00:00:00 2001
+From: Eric Dumazet
+Date: Wed, 17 May 2017 07:16:40 -0700
+Subject: sctp: do not inherit ipv6_{mc|ac|fl}_list from parent
+
+SCTP needs fixes similar to 83eaddab4378 ("ipv6/dccp: do not inherit
+ipv6_mc_list from parent"), otherwise bad things can happen.
+
+Signed-off-by: Eric Dumazet
+Reported-by: Andrey Konovalov
+Tested-by: Andrey Konovalov
+Signed-off-by: David S. Miller
+---
+ net/sctp/ipv6.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 142b70e..f5b45b8 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -677,6 +677,9 @@ static struct sock *sctp_v6_create_accept_sk(struct sock *sk,
+ 	newnp = inet6_sk(newsk);
+ 
+ 	memcpy(newnp, np, sizeof(struct ipv6_pinfo));
++	newnp->ipv6_mc_list = NULL;
++	newnp->ipv6_ac_list = NULL;
++	newnp->ipv6_fl_list = NULL;
+ 
+ 	rcu_read_lock();
+ 	opt = rcu_dereference(np->opt);
+-- 
+cgit v1.1
+
diff --git a/Patches/Linux_CVEs/CVE-2017-9076/ANY/0.patch b/Patches/Linux_CVEs/CVE-2017-9076/ANY/0.patch
index e69de29b..828a8905 100644
--- a/Patches/Linux_CVEs/CVE-2017-9076/ANY/0.patch
+++ b/Patches/Linux_CVEs/CVE-2017-9076/ANY/0.patch
@@ -0,0 +1,64 @@
+From 83eaddab4378db256d00d295bda6ca997cd13a52 Mon Sep 17 00:00:00 2001
+From: WANG Cong
+Date: Tue, 9 May 2017 16:59:54 -0700
+Subject: ipv6/dccp: do not inherit ipv6_mc_list from parent
+
+Like commit 657831ffc38e ("dccp/tcp: do not inherit mc_list from parent")
+we should clear ipv6_mc_list etc. for IPv6 sockets too.
+
+Cc: Eric Dumazet
+Signed-off-by: Cong Wang
+Acked-by: Eric Dumazet
+Signed-off-by: David S. Miller
+---
+ net/dccp/ipv6.c     | 6 ++++++
+ net/ipv6/tcp_ipv6.c | 2 ++
+ 2 files changed, 8 insertions(+)
+
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index d9b6a4e..b6bbb71 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -426,6 +426,9 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ 	newsk->sk_backlog_rcv = dccp_v4_do_rcv;
+ 	newnp->pktoptions = NULL;
+ 	newnp->opt = NULL;
++	newnp->ipv6_mc_list = NULL;
++	newnp->ipv6_ac_list = NULL;
++	newnp->ipv6_fl_list = NULL;
+ 	newnp->mcast_oif = inet6_iif(skb);
+ 	newnp->mcast_hops = ipv6_hdr(skb)->hop_limit;
+ 
+@@ -490,6 +493,9 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ 	/* Clone RX bits */
+ 	newnp->rxopt.all = np->rxopt.all;
+ 
++	newnp->ipv6_mc_list = NULL;
++	newnp->ipv6_ac_list = NULL;
++	newnp->ipv6_fl_list = NULL;
+ 	newnp->pktoptions = NULL;
+ 	newnp->opt = NULL;
+ 	newnp->mcast_oif = inet6_iif(skb);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index aeb9497..df5a9ff 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1062,6 +1062,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 		newtp->af_specific = &tcp_sock_ipv6_mapped_specific;
+ #endif
+ 
++		newnp->ipv6_mc_list = NULL;
+ 		newnp->ipv6_ac_list = NULL;
+ 		newnp->ipv6_fl_list = NULL;
+ 		newnp->pktoptions = NULL;
+@@ -1131,6 +1132,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 	   First: no IPv4 options.
+ 	 */
+ 	newinet->inet_opt = NULL;
++	newnp->ipv6_mc_list = NULL;
+ 	newnp->ipv6_ac_list = NULL;
+ 	newnp->ipv6_fl_list = NULL;
+ 
+-- 
+cgit v1.1
+
diff --git a/Patches/Linux_CVEs/Fix.sh b/Patches/Linux_CVEs/Fix.sh
new file mode 100644
index 00000000..6ad70c74
--- /dev/null
+++ b/Patches/Linux_CVEs/Fix.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+mv CVE-2016-0819/ANY/0.patch CVE-2016-0819/ANY/0.patch.disabled
+mv CVE-2016-2185/ANY/1.patch CVE-2016-2185/ANY/1.patch.dupe
+mv CVE-2016-2186/ANY/1.patch CVE-2016-2186/ANY/1.patch.dupe
+mv CVE-2016-2187/ANY/1.patch CVE-2016-2187/ANY/1.patch.dupe
+mv CVE-2016-3136/ANY/1.patch CVE-2016-3136/ANY/1.patch.dupe
+mv CVE-2016-3138/ANY/1.patch CVE-2016-3138/ANY/1.patch.dupe
+mv CVE-2016-3140/ANY/1.patch CVE-2016-3140/ANY/1.patch.dupe
+mv CVE-2016-3689/ANY/1.patch CVE-2016-3689/ANY/1.patch.dupe
+mv CVE-2017-0452/ANY/0.patch CVE-2017-0452/ANY/0.patch.dupe
+mv CVE-2017-0794/3.10/0.patch CVE-2017-0794/3.10/0.patch.disabled
+mv CVE-2017-5669/ANY/1.patch CVE-2017-5669/ANY/1.patch.dupe
+mv CVE-2017-6074/ANY/1.patch CVE-2017-6074/ANY/1.patch.dupe
+mv CVE-2017-7371/ANY/1.patch CVE-2017-7371/ANY/1.patch.dupe
diff --git a/Scripts/LineageOS-14.1/00init.sh b/Scripts/LineageOS-14.1/00init.sh
index f76470cc..e9edf567 100644
--- a/Scripts/LineageOS-14.1/00init.sh
+++ b/Scripts/LineageOS-14.1/00init.sh
@@ -1,4 +1,5 @@
 #!/bin/bash
+#Copyright (c) 2017 Spot Communications, Inc.
 
 #Sets settings used by all other scripts
 
diff --git a/Scripts/LineageOS-14.1/Optimize.sh b/Scripts/LineageOS-14.1/Optimize.sh
index 417f27c4..d6522337 100644
--- a/Scripts/LineageOS-14.1/Optimize.sh
+++ b/Scripts/LineageOS-14.1/Optimize.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-#Copyright (c) 2015-2017 Spot Communications, Inc.
+#Copyright (c) 2017 Spot Communications, Inc.
 
 #Attempts to increase performance and battery life
 
diff --git a/Scripts/LineageOS-14.1/Patch.sh b/Scripts/LineageOS-14.1/Patch.sh
index d3f5e268..6f244334 100755
--- a/Scripts/LineageOS-14.1/Patch.sh
+++ b/Scripts/LineageOS-14.1/Patch.sh
@@ -81,8 +81,8 @@ enableZram() {
 
 enabledForcedEncryption() {
 	cd $base$1;
-	sed -i 's|encryptable=/|forceencrypt,encryptable=/|' fstab.* rootdir/fstab.* rootdir/etc/fstab.* || true;
-	echo "Enabled forceencrypt";
+	sed -i 's|encryptable=/|forceencrypt=/|' fstab.* rootdir/fstab.* rootdir/etc/fstab.* || true;
+	echo "Enabled forceencrypt for $1";
 	cd $base;
 }
 export -f enabledForcedEncryption;
diff --git a/Scripts/LineageOS-14.1/Patch_CVE.sh b/Scripts/LineageOS-14.1/Patch_CVE.sh
index 5adc5081..0a4b4e95 100644
--- a/Scripts/LineageOS-14.1/Patch_CVE.sh
+++ b/Scripts/LineageOS-14.1/Patch_CVE.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-#Copyright (c) 2015-2017 Spot Communications, Inc.
+#Copyright (c) 2017 Spot Communications, Inc.
 
 #Attempts to patch kernels to be more secure
 
diff --git a/Scripts/LineageOS-14.1/Rebrand.sh b/Scripts/LineageOS-14.1/Rebrand.sh
index ce9eb2c1..49a89191 100644
--- a/Scripts/LineageOS-14.1/Rebrand.sh
+++ b/Scripts/LineageOS-14.1/Rebrand.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-#Copyright (c) 2015-2017 Spot Communications, Inc.
+#Copyright (c) 2017 Spot Communications, Inc.
 
 #Updates select user facing strings
 
diff --git a/Scripts/LineageOS-14.1/Theme.sh b/Scripts/LineageOS-14.1/Theme.sh
index 39c562b3..36be86a2 100644
--- a/Scripts/LineageOS-14.1/Theme.sh
+++ b/Scripts/LineageOS-14.1/Theme.sh
@@ -1,5 +1,5 @@
 #!/bin/bash
-#Copyright (c) 2015-2017 Spot Communications, Inc.
+#Copyright (c) 2017 Spot Communications, Inc.
 
 #Replaces teal accents with orange/yellow ones