mirror of
https://kernel.googlesource.com/pub/scm/linux/kernel/git/stable/linux-stable.git
synced 2025-09-25 16:49:33 +10:00
Merge tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth and wireless.

  Current release - regressions:

   - af_unix: allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD

  Current release - new code bugs:

   - eth: airoha: correct enable mask for RX queues 16-31

   - veth: prevent NULL pointer dereference in veth_xdp_rcv when peer
     disappears under traffic

   - ipv6: move fib6_config_validate() to ip6_route_add(), prevent
     invalid routes

  Previous releases - regressions:

   - phy: phy_caps: don't skip better duplex match on non-exact match

   - dsa: b53: fix untagged traffic sent via cpu tagged with VID 0

   - Revert "wifi: mwifiex: Fix HT40 bandwidth issue.", it caused
     transient packet loss, exact reason not fully understood, yet

  Previous releases - always broken:

   - net: clear the dst when BPF is changing skb protocol (IPv4 <> IPv6)

   - sched: sfq: fix a potential crash on gso_skb handling

   - Bluetooth: intel: improve rx buffer posting to avoid causing issues
     in the firmware

   - eth: intel: i40e: make reset handling robust against multiple
     requests

   - eth: mlx5: ensure FW pages are always allocated on the local NUMA
     node, even when the device is configured to 'serve' another node

   - wifi: ath12k: fix GCC_GCC_PCIE_HOT_RST definition for WCN7850,
     prevent kernel crashes

   - wifi: ath11k: avoid burning CPU in ath11k_debugfs_fw_stats_request()
     for 3 sec if fw_stats_done is not set"

* tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (70 commits)
  selftests: drv-net: rss_ctx: Add test for ntuple rules targeting default RSS context
  net: ethtool: Don't check if RSS context exists in case of context 0
  af_unix: Allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD.
  ipv6: Move fib6_config_validate() to ip6_route_add().
  net: drv: netdevsim: don't napi_complete() from netpoll
  net/mlx5: HWS, Add error checking to hws_bwc_rule_complex_hash_node_get()
  veth: prevent NULL pointer dereference in veth_xdp_rcv
  net_sched: remove qdisc_tree_flush_backlog()
  net_sched: ets: fix a race in ets_qdisc_change()
  net_sched: tbf: fix a race in tbf_change()
  net_sched: red: fix a race in __red_change()
  net_sched: prio: fix a race in prio_tune()
  net_sched: sch_sfq: reject invalid perturb period
  net: phy: phy_caps: Don't skip better duplex match on non-exact match
  MAINTAINERS: Update Kuniyuki Iwashima's email address.
  selftests: net: add test case for NAT46 looping back dst
  net: clear the dst when changing skb protocol
  net/mlx5e: Fix number of lanes to UNKNOWN when using data_rate_oper
  net/mlx5e: Fix leak of Geneve TLV option object
  net/mlx5: HWS, make sure the uplink is the last destination
  ...

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 27605c8c0f
diff --git a/.mailmap b/.mailmap
@@ -426,6 +426,9 @@ Krzysztof Wilczyński <kwilczynski@kernel.org> <krzysztof.wilczynski@linux.com>
 Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com>
 Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org>
 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
+Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.com>
+Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.co.jp>
+Kuniyuki Iwashima <kuniyu@google.com> <kuni1840@gmail.com>
 Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
 Lee Jones <lee@kernel.org> <joneslee@google.com>
 Lee Jones <lee@kernel.org> <lee.jones@canonical.com>
diff --git a/MAINTAINERS b/MAINTAINERS
@@ -17494,7 +17494,7 @@ F:	tools/testing/selftests/net/srv6*
 NETWORKING [TCP]
 M:	Eric Dumazet <edumazet@google.com>
 M:	Neal Cardwell <ncardwell@google.com>
-R:	Kuniyuki Iwashima <kuniyu@amazon.com>
+R:	Kuniyuki Iwashima <kuniyu@google.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/net_cachelines/tcp_sock.rst
@@ -17524,7 +17524,7 @@ F:	net/tls/*
 
 NETWORKING [SOCKETS]
 M:	Eric Dumazet <edumazet@google.com>
-M:	Kuniyuki Iwashima <kuniyu@amazon.com>
+M:	Kuniyuki Iwashima <kuniyu@google.com>
 M:	Paolo Abeni <pabeni@redhat.com>
 M:	Willem de Bruijn <willemb@google.com>
 S:	Maintained
@@ -17539,7 +17539,7 @@ F:	net/core/scm.c
 F:	net/socket.c
 
 NETWORKING [UNIX SOCKETS]
-M:	Kuniyuki Iwashima <kuniyu@amazon.com>
+M:	Kuniyuki Iwashima <kuniyu@google.com>
 S:	Maintained
 F:	include/net/af_unix.h
 F:	include/net/netns/unix.h
diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
@@ -396,8 +396,13 @@ static int btintel_pcie_submit_rx(struct btintel_pcie_data *data)
 static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
 {
 	int i, ret;
+	struct rxq *rxq = &data->rxq;
 
-	for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
+	/* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to overcome the
+	 * hardware issues leading to race condition at the firmware.
+	 */
+
+	for (i = 0; i < rxq->count - 3; i++) {
 		ret = btintel_pcie_submit_rx(data);
 		if (ret)
 			return ret;
@@ -1782,8 +1787,8 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
 	 * + size of index * Number of queues(2) * type of index array(4)
 	 * + size of context information
 	 */
-	total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
-		+ sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
+	total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
+	total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;
 
 	/* Add the sum of size of index array and size of ci struct */
 	total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
@@ -1808,36 +1813,36 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
 	data->dma_v_addr = v_addr;
 
 	/* Setup descriptor count */
-	data->txq.count = BTINTEL_DESCS_COUNT;
-	data->rxq.count = BTINTEL_DESCS_COUNT;
+	data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
+	data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;
 
 	/* Setup tfds */
 	data->txq.tfds_p_addr = p_addr;
 	data->txq.tfds = v_addr;
 
-	p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
+	v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
 
 	/* Setup urbd0 */
 	data->txq.urbd0s_p_addr = p_addr;
 	data->txq.urbd0s = v_addr;
 
-	p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
+	v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
 
 	/* Setup FRBD*/
 	data->rxq.frbds_p_addr = p_addr;
 	data->rxq.frbds = v_addr;
 
-	p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
+	v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
 
 	/* Setup urbd1 */
 	data->rxq.urbd1s_p_addr = p_addr;
 	data->rxq.urbd1s = v_addr;
 
-	p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
+	v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
 
 	/* Setup data buffers for txq */
 	err = btintel_pcie_setup_txq_bufs(data, &data->txq);
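The allocation math above packs the TX rings (tfd/urbd0) and RX rings (frbd/urbd1) into one DMA region, each scaled by its own descriptor count. A minimal standalone C sketch of the same sizing arithmetic; the struct sizes here are illustrative stand-ins, not the real hardware layouts:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's descriptor layouts. */
struct tfd   { unsigned char b[16]; };
struct urbd0 { unsigned char b[16]; };
struct frbd  { unsigned char b[8];  };
struct urbd1 { unsigned char b[16]; };
struct ctx_info { unsigned char b[64]; };

#define TX_DESCS_COUNT 32
#define RX_DESCS_COUNT 64
#define NUM_QUEUES      2

int main(void)
{
	size_t total;

	/* TX rings scale with the TX count, RX rings with the RX count,
	 * mirroring the split introduced in btintel_pcie_alloc() above.
	 */
	total  = (sizeof(struct tfd) + sizeof(struct urbd0)) * TX_DESCS_COUNT;
	total += (sizeof(struct frbd) + sizeof(struct urbd1)) * RX_DESCS_COUNT;

	/* Index arrays (u16) for 2 queues x 4 index types, plus context info. */
	total += sizeof(unsigned short) * NUM_QUEUES * 4 + sizeof(struct ctx_info);

	printf("DMA region size: %zu bytes\n", total);
	return 0;
}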
diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
@@ -154,8 +154,11 @@ enum msix_mbox_int_causes {
 /* Default interrupt timeout in msec */
 #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS	3000
 
-/* The number of descriptors in TX/RX queues */
-#define BTINTEL_DESCS_COUNT	16
+/* The number of descriptors in TX queues */
+#define BTINTEL_PCIE_TX_DESCS_COUNT	32
+
+/* The number of descriptors in RX queues */
+#define BTINTEL_PCIE_RX_DESCS_COUNT	64
 
 /* Number of Queue for TX and RX
  * It indicates the index of the IA(Index Array)
@@ -177,9 +180,6 @@ enum {
 /* Doorbell vector for TFD */
 #define BTINTEL_PCIE_TX_DB_VEC	0
 
-/* Number of pending RX requests for downlink */
-#define BTINTEL_PCIE_RX_MAX_QUEUE	6
-
 /* Doorbell vector for FRBD */
 #define BTINTEL_PCIE_RX_DB_VEC	513
 
diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
@@ -2034,9 +2034,6 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
 
 		b53_get_vlan_entry(dev, pvid, vl);
 		vl->members &= ~BIT(port);
-		if (vl->members == BIT(cpu_port))
-			vl->members &= ~BIT(cpu_port);
-		vl->untag = vl->members;
 		b53_set_vlan_entry(dev, pvid, vl);
 	}
 
@@ -2115,8 +2112,7 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
 	}
 
 	b53_get_vlan_entry(dev, pvid, vl);
-	vl->members |= BIT(port) | BIT(cpu_port);
-	vl->untag |= BIT(port) | BIT(cpu_port);
+	vl->members |= BIT(port);
 	b53_set_vlan_entry(dev, pvid, vl);
 }
diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h
@@ -614,8 +614,9 @@
 	 RX19_DONE_INT_MASK | RX18_DONE_INT_MASK | \
 	 RX17_DONE_INT_MASK | RX16_DONE_INT_MASK)
 
-#define RX_DONE_INT_MASK	(RX_DONE_HIGH_INT_MASK | RX_DONE_LOW_INT_MASK)
+#define RX_DONE_HIGH_OFFSET	fls(RX_DONE_HIGH_INT_MASK)
+#define RX_DONE_INT_MASK \
+	((RX_DONE_HIGH_INT_MASK << RX_DONE_HIGH_OFFSET) | RX_DONE_LOW_INT_MASK)
 
 #define INT_RX2_MASK(_n) \
 	((RX_NO_CPU_DSCP_HIGH_INT_MASK & (_n)) | \
diff --git a/drivers/net/ethernet/freescale/enetc/Kconfig b/drivers/net/ethernet/freescale/enetc/Kconfig
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 config FSL_ENETC_CORE
 	tristate
+	select NXP_NETC_LIB if NXP_NTMP
 	help
 	  This module supports common functionality between the PF and VF
 	  drivers for the NXP ENETC controller.
@@ -22,6 +23,9 @@ config NXP_NETC_LIB
 	  Switch, such as NETC Table Management Protocol (NTMP) 2.0, common tc
 	  flower and debugfs interfaces and so on.
 
+config NXP_NTMP
+	bool
+
 config FSL_ENETC
 	tristate "ENETC PF driver"
 	depends on PCI_MSI
@@ -45,7 +49,7 @@ config NXP_ENETC4
 	select FSL_ENETC_CORE
 	select FSL_ENETC_MDIO
 	select NXP_ENETC_PF_COMMON
-	select NXP_NETC_LIB
+	select NXP_NTMP
 	select PHYLINK
 	select DIMLIB
 	help
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -477,10 +477,6 @@ static void e1000_down_and_stop(struct e1000_adapter *adapter)
 
 	cancel_delayed_work_sync(&adapter->phy_info_task);
 	cancel_delayed_work_sync(&adapter->fifo_stall_task);
-
-	/* Only kill reset task if adapter is not resetting */
-	if (!test_bit(__E1000_RESETTING, &adapter->flags))
-		cancel_work_sync(&adapter->reset_task);
 }
 
 void e1000_down(struct e1000_adapter *adapter)
@@ -1266,6 +1262,10 @@ static void e1000_remove(struct pci_dev *pdev)
 
 	unregister_netdev(netdev);
 
+	/* Only kill reset task if adapter is not resetting */
+	if (!test_bit(__E1000_RESETTING, &adapter->flags))
+		cancel_work_sync(&adapter->reset_task);
+
 	e1000_phy_hw_reset(hw);
 
 	kfree(adapter->tx_ring);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -1546,8 +1546,8 @@ static void i40e_cleanup_reset_vf(struct i40e_vf *vf)
  * @vf: pointer to the VF structure
  * @flr: VFLR was issued or not
  *
- * Returns true if the VF is in reset, resets successfully, or resets
- * are disabled and false otherwise.
+ * Return: True if reset was performed successfully or if resets are disabled.
+ * False if reset is already in progress.
  **/
 bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
 {
@@ -1566,7 +1566,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
 
 	/* If VF is being reset already we don't need to continue. */
 	if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
-		return true;
+		return false;
 
 	i40e_trigger_vf_reset(vf, flr);
 
@@ -4328,7 +4328,10 @@ int i40e_vc_process_vflr_event(struct i40e_pf *pf)
 		reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));
 		if (reg & BIT(bit_idx))
 			/* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */
-			i40e_reset_vf(vf, true);
+			if (!i40e_reset_vf(vf, true)) {
+				/* At least one VF did not finish resetting, retry next time */
+				set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
+			}
 	}
 
 	return 0;
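The VFLR change above turns a fire-and-forget reset into a retryable operation: when i40e_reset_vf() reports that a reset is already in progress, the event bit is re-armed so the handler runs again later. A minimal sketch of that retry-on-busy idiom, with all names hypothetical rather than the driver's API:

#include <stdbool.h>
#include <stdio.h>

static bool busy = true; /* pretend another reset is in flight at first */

/* Returns false when the reset could not run because one is in progress. */
static bool reset_vf(int vf)
{
	if (busy) {
		busy = false; /* the in-flight reset finishes eventually */
		return false;
	}
	printf("vf %d reset\n", vf);
	return true;
}

int main(void)
{
	bool pending = true;

	while (pending) {
		pending = false;
		if (!reset_vf(0))
			pending = true; /* re-arm, retry on the next event pass */
	}
	return 0;
}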
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -3209,6 +3209,17 @@ static void iavf_reset_task(struct work_struct *work)
 	}
 
 continue_reset:
+	/* If we are still early in the state machine, just restart. */
+	if (adapter->state <= __IAVF_INIT_FAILED) {
+		iavf_shutdown_adminq(hw);
+		iavf_change_state(adapter, __IAVF_STARTUP);
+		iavf_startup(adapter);
+		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+				   msecs_to_jiffies(30));
+		netdev_unlock(netdev);
+		return;
+	}
+
 	/* We don't use netif_running() because it may be true prior to
 	 * ndo_open() returning, so we can't assume it means all our open
 	 * tasks have finished, since we're not holding the rtnl_lock here.
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -79,6 +79,23 @@ iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,
 			return iavf_status_to_errno(status);
 		received_op =
 			(enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);
+
+		if (received_op == VIRTCHNL_OP_EVENT) {
+			struct iavf_adapter *adapter = hw->back;
+			struct virtchnl_pf_event *vpe =
+				(struct virtchnl_pf_event *)event->msg_buf;
+
+			if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)
+				continue;
+
+			dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");
+			if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))
+				iavf_schedule_reset(adapter,
+						    IAVF_FLAG_RESET_PENDING);
+
+			return -EIO;
+		}
+
 		if (op_to_poll == received_op)
 			break;
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -2299,6 +2299,7 @@ static int ice_capture_crosststamp(ktime_t *device,
 	ts = ((u64)ts_hi << 32) | ts_lo;
 	system->cycles = ts;
 	system->cs_id = CSID_X86_ART;
+	system->use_nsecs = true;
 
 	/* Read Device source clock time */
 	ts_lo = rd32(hw, cfg->dev_time_l[tmr_idx]);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -43,7 +43,6 @@
 #include "en/fs_ethtool.h"
 
 #define LANES_UNKNOWN 0
-#define MAX_LANES 8
 
 void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
 			       struct ethtool_drvinfo *drvinfo)
@@ -1098,10 +1097,8 @@ static void get_link_properties(struct net_device *netdev,
 		speed = info->speed;
 		lanes = info->lanes;
 		duplex = DUPLEX_FULL;
-	} else if (data_rate_oper) {
+	} else if (data_rate_oper)
 		speed = 100 * data_rate_oper;
-		lanes = MAX_LANES;
-	}
 
 out:
 	link_ksettings->base.duplex = duplex;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2028,9 +2028,8 @@ err_out:
 	return err;
 }
 
-static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
+static bool mlx5_flow_has_geneve_opt(struct mlx5_flow_spec *spec)
 {
-	struct mlx5_flow_spec *spec = &flow->attr->parse_attr->spec;
 	void *headers_v = MLX5_ADDR_OF(fte_match_param,
 				       spec->match_value,
 				       misc_parameters_3);
@@ -2069,7 +2068,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
 	}
 	complete_all(&flow->del_hw_done);
 
-	if (mlx5_flow_has_geneve_opt(flow))
+	if (mlx5_flow_has_geneve_opt(&attr->parse_attr->spec))
 		mlx5_geneve_tlv_option_del(priv->mdev->geneve);
 
 	if (flow->decap_route)
@@ -2574,12 +2573,13 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 
 	err = mlx5e_tc_tun_parse(filter_dev, priv, tmp_spec, f, match_level);
 	if (err) {
-		kvfree(tmp_spec);
 		NL_SET_ERR_MSG_MOD(extack, "Failed to parse tunnel attributes");
 		netdev_warn(priv->netdev, "Failed to parse tunnel attributes");
-		return err;
+	} else {
+		err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
 	}
-	err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
+	if (mlx5_flow_has_geneve_opt(tmp_spec))
+		mlx5_geneve_tlv_option_del(priv->mdev->geneve);
 	kvfree(tmp_spec);
 	if (err)
 		return err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1295,12 +1295,15 @@ mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
 		ret = mlx5_eswitch_load_pf_vf_vport(esw, MLX5_VPORT_ECPF, enabled_events);
 		if (ret)
 			goto ecpf_err;
+		if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+			ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs,
+							     enabled_events);
+			if (ret)
+				goto ec_vf_err;
+		}
 	}
 
-	/* Enable ECVF vports */
-	if (mlx5_core_ec_sriov_enabled(esw->dev)) {
-		ret = mlx5_eswitch_load_ec_vf_vports(esw,
-						     esw->esw_funcs.num_ec_vfs,
-						     enabled_events);
-		if (ret)
-			goto ec_vf_err;
-	}
-
 	/* Enable VF vports */
@@ -1331,9 +1334,11 @@ void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw)
 {
 	mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
 
-	if (mlx5_core_ec_sriov_enabled(esw->dev))
-		mlx5_eswitch_unload_ec_vf_vports(esw,
-						 esw->esw_funcs.num_ec_vfs);
-
 	if (mlx5_ecpf_vport_exists(esw->dev)) {
+		if (mlx5_core_ec_sriov_enabled(esw->dev))
+			mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs);
 		mlx5_eswitch_unload_pf_vf_vport(esw, MLX5_VPORT_ECPF);
 	}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -2228,6 +2228,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
 	struct mlx5_flow_handle *rule;
 	struct match_list *iter;
 	bool take_write = false;
+	bool try_again = false;
 	struct fs_fte *fte;
 	u64  version = 0;
 	int err;
@@ -2292,6 +2293,7 @@ skip_search:
 		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
 
 		if (!g->node.active) {
+			try_again = true;
 			up_write_ref_node(&g->node, false);
 			continue;
 		}
@@ -2313,7 +2315,8 @@ skip_search:
 			tree_put_node(&fte->node, false);
 			return rule;
 		}
-	rule = ERR_PTR(-ENOENT);
+	err = try_again ? -EAGAIN : -ENOENT;
+	rule = ERR_PTR(err);
 out:
 	kmem_cache_free(steering->ftes_cache, fte);
 	return rule;
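The error-code split above matters to callers: -EAGAIN now means the lookup raced with a concurrent group deletion and is worth retrying, while -ENOENT remains a final "does not exist". A tiny sketch of that distinction, with illustrative names only:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Mimics the try_again bookkeeping in try_add_to_existing_fg() above. */
static int lookup(bool raced_with_delete, bool found)
{
	bool try_again = false;

	if (raced_with_delete)
		try_again = true;   /* group went inactive under the lock */

	if (found)
		return 0;

	return try_again ? -EAGAIN : -ENOENT;
}

int main(void)
{
	printf("race: %d (-EAGAIN lets the caller retry)\n", lookup(true, false));
	printf("miss: %d (-ENOENT is final)\n", lookup(false, false));
	return 0;
}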
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -291,7 +291,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
 static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
 {
 	struct device *device = mlx5_core_dma_dev(dev);
-	int nid = dev_to_node(device);
+	int nid = dev->priv.numa_node;
 	struct page *page;
 	u64 zero_addr = 1;
 	u64 addr;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
@@ -1370,8 +1370,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
 	struct mlx5hws_cmd_set_fte_attr fte_attr = {0};
 	struct mlx5hws_cmd_forward_tbl *fw_island;
 	struct mlx5hws_action *action;
-	u32 i /*, packet_reformat_id*/;
-	int ret;
+	int ret, last_dest_idx = -1;
+	u32 i;
 
 	if (num_dest <= 1) {
 		mlx5hws_err(ctx, "Action must have multiple dests\n");
@@ -1401,11 +1401,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
 		dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
 		fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
 		fte_attr.ignore_flow_level = ignore_flow_level;
-		/* ToDo: In SW steering we have a handling of 'go to WIRE'
-		 * destination here by upper layer setting 'is_wire_ft' flag
-		 * if the destination is wire.
-		 * This is because uplink should be last dest in the list.
-		 */
+		if (dests[i].is_wire_ft)
+			last_dest_idx = i;
 		break;
 	case MLX5HWS_ACTION_TYP_VPORT:
 		dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
@@ -1429,6 +1426,9 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
 		}
 	}
 
+	if (last_dest_idx != -1)
+		swap(dest_list[last_dest_idx], dest_list[num_dest - 1]);
+
 	fte_attr.dests_num = num_dest;
 	fte_attr.dests = dest_list;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
@@ -1070,7 +1070,7 @@ hws_bwc_rule_complex_hash_node_get(struct mlx5hws_bwc_rule *bwc_rule,
 	struct mlx5hws_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher;
 	struct mlx5hws_bwc_complex_rule_hash_node *node, *old_node;
 	struct rhashtable *refcount_hash;
-	int i;
+	int ret, i;
 
 	bwc_rule->complex_hash_node = NULL;
 
@@ -1078,7 +1078,11 @@ hws_bwc_rule_complex_hash_node_get(struct mlx5hws_bwc_rule *bwc_rule,
 	if (unlikely(!node))
 		return -ENOMEM;
 
-	node->tag = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL);
+	ret = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL);
+	if (ret < 0)
+		goto err_free_node;
+	node->tag = ret;
+
 	refcount_set(&node->refcount, 1);
 
 	/* Clear match buffer - turn off all the unrelated fields
@@ -1094,6 +1098,11 @@ hws_bwc_rule_complex_hash_node_get(struct mlx5hws_bwc_rule *bwc_rule,
 	old_node = rhashtable_lookup_get_insert_fast(refcount_hash,
 						     &node->hash_node,
 						     hws_refcount_hash);
+	if (IS_ERR(old_node)) {
+		ret = PTR_ERR(old_node);
+		goto err_free_ida;
+	}
+
 	if (old_node) {
 		/* Rule with the same tag already exists - update refcount */
 		refcount_inc(&old_node->refcount);
@@ -1112,6 +1121,12 @@ hws_bwc_rule_complex_hash_node_get(struct mlx5hws_bwc_rule *bwc_rule,
 
 	bwc_rule->complex_hash_node = node;
 	return 0;
+
+err_free_ida:
+	ida_free(&bwc_matcher->complex->metadata_ida, node->tag);
+err_free_node:
+	kfree(node);
+	return ret;
 }
 
 static void
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
@@ -785,6 +785,9 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
 	HWS_SET_HDR(fc, match_param, IP_PROTOCOL_O,
 		    outer_headers.ip_protocol,
 		    eth_l3_outer.protocol_next_header);
+	HWS_SET_HDR(fc, match_param, IP_VERSION_O,
+		    outer_headers.ip_version,
+		    eth_l3_outer.ip_version);
 	HWS_SET_HDR(fc, match_param, IP_TTL_O,
 		    outer_headers.ttl_hoplimit,
 		    eth_l3_outer.time_to_live_hop_limit);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -966,6 +966,9 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
 		switch (attr->type) {
 		case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
 			dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst);
+			if (dst->dest_attr.ft->flags &
+			    MLX5_FLOW_TABLE_UPLINK_VPORT)
+				dest_actions[num_dest_actions].is_wire_ft = true;
 			break;
 		case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM:
 			dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx,
@@ -1357,6 +1360,7 @@ mlx5_cmd_hws_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns,
 		pkt_reformat->fs_hws_action.pr_data = pr_data;
 	}
 
+	mutex_init(&pkt_reformat->fs_hws_action.lock);
 	pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_HWS;
 	pkt_reformat->fs_hws_action.hws_action = hws_action;
 	return 0;
@@ -1503,7 +1507,6 @@ static int mlx5_cmd_hws_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
 		err = -ENOMEM;
 		goto release_mh;
 	}
-	mutex_init(&modify_hdr->fs_hws_action.lock);
 	modify_hdr->fs_hws_action.mh_data = mh_data;
 	modify_hdr->fs_hws_action.fs_pool = pool;
 	modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
@@ -213,6 +213,7 @@ struct mlx5hws_action_dest_attr {
 	struct mlx5hws_action *dest;
 	/* Optional reformat action */
 	struct mlx5hws_action *reformat;
+	bool is_wire_ft;
 };
 
 /**
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
@@ -247,15 +247,39 @@ static sci_t make_sci(const u8 *addr, __be16 port)
 	return sci;
 }
 
-static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present)
+static sci_t macsec_active_sci(struct macsec_secy *secy)
 {
-	sci_t sci;
+	struct macsec_rx_sc *rx_sc = rcu_dereference_bh(secy->rx_sc);
 
-	if (sci_present)
+	/* Case single RX SC */
+	if (rx_sc && !rcu_dereference_bh(rx_sc->next))
+		return (rx_sc->active) ? rx_sc->sci : 0;
+	/* Case no RX SC or multiple */
+	else
+		return 0;
+}
+
+static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present,
+			      struct macsec_rxh_data *rxd)
+{
+	struct macsec_dev *macsec;
+	sci_t sci = 0;
+
+	/* SC = 1 */
+	if (sci_present) {
 		memcpy(&sci, hdr->secure_channel_id,
 		       sizeof(hdr->secure_channel_id));
-	else
+	/* SC = 0; ES = 0 */
+	} else if ((!(hdr->tci_an & (MACSEC_TCI_ES | MACSEC_TCI_SC))) &&
+		   (list_is_singular(&rxd->secys))) {
+		/* Only one SECY should exist on this scenario */
+		macsec = list_first_or_null_rcu(&rxd->secys, struct macsec_dev,
+						secys);
+		if (macsec)
+			return macsec_active_sci(&macsec->secy);
+	} else {
 		sci = make_sci(hdr->eth.h_source, MACSEC_PORT_ES);
+	}
 
 	return sci;
 }
@@ -1109,7 +1133,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 	struct macsec_rxh_data *rxd;
 	struct macsec_dev *macsec;
 	unsigned int len;
-	sci_t sci;
+	sci_t sci = 0;
 	u32 hdr_pn;
 	bool cbit;
 	struct pcpu_rx_sc_stats *rxsc_stats;
@@ -1156,11 +1180,14 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
 
 	macsec_skb_cb(skb)->has_sci = !!(hdr->tci_an & MACSEC_TCI_SC);
 	macsec_skb_cb(skb)->assoc_num = hdr->tci_an & MACSEC_AN_MASK;
-	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci);
 
 	rcu_read_lock();
 	rxd = macsec_data_rcu(skb->dev);
 
+	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci, rxd);
+	if (!sci)
+		goto drop_nosc;
+
 	list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
 		struct macsec_rx_sc *sc = find_rx_sc(&macsec->secy, sci);
 
@@ -1283,6 +1310,7 @@ drop:
 	macsec_rxsa_put(rx_sa);
drop_nosa:
 	macsec_rxsc_put(rx_sc);
drop_nosc:
 	rcu_read_unlock();
drop_direct:
 	kfree_skb(skb);
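The SCI derivation above gains a third case: for SC = 0 / ES = 0 frames the SCI is no longer synthesized from the source MAC; it is recovered from the single active RX SC when exactly one SECY exists, and the frame is dropped if none can be found. A heavily simplified C sketch of that decision order; every name and type here is hypothetical, and the RCU list walks are reduced to flags:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t sci_t;

static sci_t frame_sci(int sci_present, sci_t hdr_sci, int es_or_sc_set,
		       int single_secy, sci_t active_rx_sci,
		       const uint8_t src[6], uint16_t port)
{
	sci_t sci = 0;

	if (sci_present)                        /* SC = 1: SCI in the SecTAG */
		sci = hdr_sci;
	else if (!es_or_sc_set && single_secy)  /* SC = 0, ES = 0 */
		sci = active_rx_sci;            /* may be 0 -> caller drops */
	else {                                  /* legacy: source MAC + port-ES */
		memcpy(&sci, src, 6);
		sci |= (sci_t)port << 48;
	}
	return sci;
}

int main(void)
{
	const uint8_t mac[6] = { 0x02, 0, 0, 0, 0, 1 };

	printf("%llx\n",
	       (unsigned long long)frame_sci(0, 0, 0, 1, 0xaabb, mac, 1));
	return 0;
}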
diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
@@ -1252,7 +1252,6 @@ static int sysdata_append_release(struct netconsole_target *nt, int offset)
  */
 static int prepare_extradata(struct netconsole_target *nt)
 {
-	u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME;
 	int extradata_len;
 
 	/* userdata was appended when configfs write helper was called
@@ -1260,7 +1259,7 @@ static int prepare_extradata(struct netconsole_target *nt)
 	 */
 	extradata_len = nt->userdata_length;
 
-	if (!(nt->sysdata_fields & fields))
+	if (!nt->sysdata_fields)
 		goto out;
 
 	if (nt->sysdata_fields & SYSDATA_CPU_NR)
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
@@ -371,7 +371,8 @@ static int nsim_poll(struct napi_struct *napi, int budget)
 	int done;
 
 	done = nsim_rcv(rq, budget);
-	napi_complete(napi);
+	if (done < budget)
+		napi_complete_done(napi, done);
 
 	return done;
 }
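The NAPI contract behind this fix: a poll callback may only signal completion when it consumed less than its budget, and netpoll invokes poll with a budget of 0, where completing is never allowed. A minimal sketch of that rule, with nsim_rcv and the napi types replaced by stand-ins:

#include <stdio.h>

static int rx_work_available = 5;

static int fake_rcv(int budget)
{
	int done = rx_work_available < budget ? rx_work_available : budget;

	rx_work_available -= done;
	return done;
}

static int poll(int budget)
{
	int done = fake_rcv(budget);

	/* Only complete when below budget; with budget == 0 (the netpoll
	 * case) this branch can never fire, which is exactly the point.
	 */
	if (done < budget)
		printf("napi_complete_done(napi, %d)\n", done);

	return done;
}

int main(void)
{
	poll(0);  /* netpoll: no completion */
	poll(64); /* normal softirq poll: completes */
	return 0;
}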
@@ -445,6 +445,9 @@ int __mdiobus_read(struct mii_bus *bus, int addr, u32 regnum)
 
 	lockdep_assert_held_once(&bus->mdio_lock);
 
+	if (addr >= PHY_MAX_ADDR)
+		return -ENXIO;
+
 	if (bus->read)
 		retval = bus->read(bus, addr, regnum);
 	else
@@ -474,6 +477,9 @@ int __mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val)
 
 	lockdep_assert_held_once(&bus->mdio_lock);
 
+	if (addr >= PHY_MAX_ADDR)
+		return -ENXIO;
+
 	if (bus->write)
 		err = bus->write(bus, addr, regnum, val);
 	else
@@ -535,6 +541,9 @@ int __mdiobus_c45_read(struct mii_bus *bus, int addr, int devad, u32 regnum)
 
 	lockdep_assert_held_once(&bus->mdio_lock);
 
+	if (addr >= PHY_MAX_ADDR)
+		return -ENXIO;
+
 	if (bus->read_c45)
 		retval = bus->read_c45(bus, addr, devad, regnum);
 	else
@@ -566,6 +575,9 @@ int __mdiobus_c45_write(struct mii_bus *bus, int addr, int devad, u32 regnum,
 
 	lockdep_assert_held_once(&bus->mdio_lock);
 
+	if (addr >= PHY_MAX_ADDR)
+		return -ENXIO;
+
 	if (bus->write_c45)
 		err = bus->write_c45(bus, addr, devad, regnum, val);
 	else
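All four accessors gain the same guard: validate the PHY address before it is used to index per-bus state. The pattern in isolation, as a small runnable sketch (names illustrative):

#include <errno.h>
#include <stdio.h>

#define PHY_MAX_ADDR 32

static int regs[PHY_MAX_ADDR];

/* Mirrors the guard added above: reject out-of-range addresses up front. */
static int mdiobus_read_sketch(int addr, int regnum)
{
	if (addr >= PHY_MAX_ADDR)
		return -ENXIO;

	(void)regnum;
	return regs[addr];
}

int main(void)
{
	printf("addr 31 -> %d\n", mdiobus_read_sketch(31, 0));
	printf("addr 32 -> %d (-ENXIO)\n", mdiobus_read_sketch(32, 0));
	return 0;
}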
diff --git a/drivers/net/phy/phy_caps.c b/drivers/net/phy/phy_caps.c
@@ -188,6 +188,9 @@ phy_caps_lookup_by_linkmode_rev(const unsigned long *linkmodes, bool fdx_only)
 * When @exact is not set, we return either an exact match, or matching capabilities
 * at lower speed, or the lowest matching speed, or NULL.
 *
+ * Non-exact matches will try to return an exact speed and duplex match, but may
+ * return matching capabilities with same speed but a different duplex.
+ *
 * Returns: a matched link_capabilities according to the above process, NULL
 * otherwise.
 */
@@ -195,7 +198,7 @@ const struct link_capabilities *
 phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
 		bool exact)
 {
-	const struct link_capabilities *lcap, *last = NULL;
+	const struct link_capabilities *lcap, *match = NULL, *last = NULL;
 
 	for_each_link_caps_desc_speed(lcap) {
 		if (linkmode_intersects(lcap->linkmodes, supported)) {
@@ -204,16 +207,19 @@ phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
 			if (lcap->speed == speed && lcap->duplex == duplex) {
 				return lcap;
 			} else if (!exact) {
-				if (lcap->speed <= speed)
-					return lcap;
+				if (!match && lcap->speed <= speed)
+					match = lcap;
+
+				if (lcap->speed < speed)
+					break;
 			}
 		}
 	}
 
-	if (!exact)
-		return last;
+	if (!match && !exact)
+		match = last;
 
-	return NULL;
+	return match;
 }
 EXPORT_SYMBOL_GPL(phy_caps_lookup);
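The bug was returning on the first speed-compatible entry, which could skip a same-speed entry with the requested duplex sitting later in the table. The fix records the first candidate but keeps scanning until the speed drops below the request. A self-contained sketch of the fixed selection over a descending-speed table (all data hypothetical):

#include <stdio.h>

struct cap { int speed, duplex; };

/* Sorted by descending speed, like the link_caps descriptors. */
static const struct cap caps[] = {
	{ 1000, 1 }, { 100, 1 }, { 100, 0 }, { 10, 1 }, { 10, 0 },
};

static const struct cap *lookup(int speed, int duplex, int exact)
{
	const struct cap *match = NULL, *last = NULL;
	unsigned int i;

	for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
		const struct cap *c = &caps[i];

		last = c;
		if (c->speed == speed && c->duplex == duplex)
			return c;            /* exact speed+duplex */
		if (!exact) {
			if (!match && c->speed <= speed)
				match = c;   /* remember, but keep scanning */
			if (c->speed < speed)
				break;       /* no better duplex ahead */
		}
	}
	if (!match && !exact)
		match = last;                /* fall back to lowest speed */
	return match;
}

int main(void)
{
	/* Request a duplex value that doesn't exist at 100 Mb/s. */
	const struct cap *c = lookup(100, 2, 0);

	if (c)
		printf("best: %d Mb/s, duplex %d\n", c->speed, c->duplex);
	return 0;
}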
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
@@ -10054,6 +10054,7 @@ static const struct usb_device_id rtl8152_table[] = {
 	{ USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
 	{ USB_DEVICE(VENDOR_ID_NVIDIA,  0x09ff) },
 	{ USB_DEVICE(VENDOR_ID_TPLINK,  0x0601) },
+	{ USB_DEVICE(VENDOR_ID_TPLINK,  0x0602) },
 	{ USB_DEVICE(VENDOR_ID_DLINK,   0xb301) },
 	{ USB_DEVICE(VENDOR_ID_DELL,    0xb097) },
 	{ USB_DEVICE(VENDOR_ID_ASUS,    0x1976) },
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
@@ -909,7 +909,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 
 	/* NAPI functions as RCU section */
 	peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());
-	peer_txq = netdev_get_tx_queue(peer_dev, queue_idx);
+	peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
 
 	for (i = 0; i < budget; i++) {
 		void *ptr = __ptr_ring_consume(&rq->xdp_ring);
@@ -959,7 +959,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 	rq->stats.vs.xdp_packets += done;
 	u64_stats_update_end(&rq->stats.syncp);
 
-	if (unlikely(netif_tx_queue_stopped(peer_txq)))
+	if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq)))
 		netif_tx_wake_queue(peer_txq);
 
 	return done;
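The peer device can be torn down while this NAPI poll is still draining its ring, so both the initial queue lookup and the final wake must tolerate a NULL peer. The guarded pattern in isolation, with illustrative types standing in for net_device and netdev_queue:

#include <stdio.h>

struct txq { int stopped; };
struct dev { struct txq q; };

/* The peer pointer may go NULL under RCU while RX is still draining. */
static int xdp_rcv(struct dev *peer, int budget)
{
	struct txq *peer_txq = peer ? &peer->q : NULL;
	int done = budget; /* pretend the whole budget was consumed */

	if (peer_txq && peer_txq->stopped)
		printf("waking peer txq\n");

	return done;
}

int main(void)
{
	struct dev peer = { { 1 } };

	xdp_rcv(&peer, 64); /* normal path */
	xdp_rcv(NULL, 64);  /* peer vanished: no dereference */
	return 0;
}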
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
@@ -4,6 +4,7 @@
 * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
 * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
 * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
 */
 
 #include "mac.h"
@@ -1022,6 +1023,26 @@ static inline int ath10k_vdev_setup_sync(struct ath10k *ar)
 	return ar->last_wmi_vdev_start_status;
 }
 
+static inline int ath10k_vdev_delete_sync(struct ath10k *ar)
+{
+	unsigned long time_left;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	if (!test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map))
+		return 0;
+
+	if (test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags))
+		return -ESHUTDOWN;
+
+	time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
+						ATH10K_VDEV_DELETE_TIMEOUT_HZ);
+	if (time_left == 0)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
 static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)
 {
 	struct cfg80211_chan_def *chandef = NULL;
@@ -5900,7 +5921,6 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
 	struct ath10k *ar = hw->priv;
 	struct ath10k_vif *arvif = (void *)vif->drv_priv;
 	struct ath10k_peer *peer;
-	unsigned long time_left;
 	int ret;
 	int i;
 
@@ -5940,13 +5960,10 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
 		ath10k_warn(ar, "failed to delete WMI vdev %i: %d\n",
 			    arvif->vdev_id, ret);
 
-	if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {
-		time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
-							ATH10K_VDEV_DELETE_TIMEOUT_HZ);
-		if (time_left == 0) {
-			ath10k_warn(ar, "Timeout in receiving vdev delete response\n");
-			goto out;
-		}
+	ret = ath10k_vdev_delete_sync(ar);
+	if (ret) {
+		ath10k_warn(ar, "Error in receiving vdev delete response: %d\n", ret);
+		goto out;
 	}
 
 	/* Some firmware revisions don't notify host about self-peer removal
diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
@@ -938,7 +938,9 @@ static int ath10k_snoc_hif_start(struct ath10k *ar)
 
 	dev_set_threaded(ar->napi_dev, true);
 	ath10k_core_napi_enable(ar);
-	ath10k_snoc_irq_enable(ar);
+	/* IRQs are left enabled when we restart due to a firmware crash */
+	if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags))
+		ath10k_snoc_irq_enable(ar);
 	ath10k_snoc_rx_post(ar);
 
 	clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
@@ -990,6 +990,7 @@ void ath11k_fw_stats_init(struct ath11k *ar)
 	INIT_LIST_HEAD(&ar->fw_stats.bcn);
 
 	init_completion(&ar->fw_stats_complete);
+	init_completion(&ar->fw_stats_done);
 }
 
 void ath11k_fw_stats_free(struct ath11k_fw_stats *stats)
@@ -2134,6 +2135,20 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
 {
 	int ret;
 
+	switch (ath11k_crypto_mode) {
+	case ATH11K_CRYPT_MODE_SW:
+		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+		break;
+	case ATH11K_CRYPT_MODE_HW:
+		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+		break;
+	default:
+		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
+		return -EINVAL;
+	}
+
 	ret = ath11k_core_start_firmware(ab, ab->fw_mode);
 	if (ret) {
 		ath11k_err(ab, "failed to start firmware: %d\n", ret);
@@ -2152,20 +2167,6 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
 		goto err_firmware_stop;
 	}
 
-	switch (ath11k_crypto_mode) {
-	case ATH11K_CRYPT_MODE_SW:
-		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
-		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
-		break;
-	case ATH11K_CRYPT_MODE_HW:
-		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
-		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
-		break;
-	default:
-		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
-		return -EINVAL;
-	}
-
 	if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
 		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
 
diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
@@ -600,6 +600,8 @@ struct ath11k_fw_stats {
 	struct list_head pdevs;
 	struct list_head vdevs;
 	struct list_head bcn;
+	u32 num_vdev_recvd;
+	u32 num_bcn_recvd;
 };
 
 struct ath11k_dbg_htt_stats {
@@ -784,7 +786,7 @@ struct ath11k {
 	u8 alpha2[REG_ALPHA2_LEN + 1];
 	struct ath11k_fw_stats fw_stats;
 	struct completion fw_stats_complete;
-	bool fw_stats_done;
+	struct completion fw_stats_done;
 
 	/* protected by conf_mutex */
 	bool ps_state_enable;
diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: BSD-3-Clause-Clear
 /*
 * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
 */
 
 #include <linux/vmalloc.h>
@@ -93,57 +93,14 @@ void ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
 	spin_unlock_bh(&dbr_data->lock);
 }
 
-static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar)
-{
-	spin_lock_bh(&ar->data_lock);
-	ar->fw_stats_done = false;
-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
-	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
-	spin_unlock_bh(&ar->data_lock);
-}
-
 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats)
 {
 	struct ath11k_base *ab = ar->ab;
-	struct ath11k_pdev *pdev;
-	bool is_end;
-	static unsigned int num_vdev, num_bcn;
-	size_t total_vdevs_started = 0;
-	int i;
-
-	/* WMI_REQUEST_PDEV_STAT request has been already processed */
-
-	if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
-		ar->fw_stats_done = true;
-		return;
-	}
-
-	if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {
-		if (list_empty(&stats->vdevs)) {
-			ath11k_warn(ab, "empty vdev stats");
-			return;
-		}
-		/* FW sends all the active VDEV stats irrespective of PDEV,
-		 * hence limit until the count of all VDEVs started
-		 */
-		for (i = 0; i < ab->num_radios; i++) {
-			pdev = rcu_dereference(ab->pdevs_active[i]);
-			if (pdev && pdev->ar)
-				total_vdevs_started += ar->num_started_vdevs;
-		}
-
-		is_end = ((++num_vdev) == total_vdevs_started);
-
-		list_splice_tail_init(&stats->vdevs,
-				      &ar->fw_stats.vdevs);
-
-		if (is_end) {
-			ar->fw_stats_done = true;
-			num_vdev = 0;
-		}
-		return;
-	}
+	bool is_end = true;
 
+	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_RSSI_PER_CHAIN_STAT and
+	 * WMI_REQUEST_VDEV_STAT requests have been already processed.
+	 */
 	if (stats->stats_id == WMI_REQUEST_BCN_STAT) {
 		if (list_empty(&stats->bcn)) {
 			ath11k_warn(ab, "empty bcn stats");
@@ -152,97 +109,18 @@ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *
 		/* Mark end until we reached the count of all started VDEVs
 		 * within the PDEV
 		 */
-		is_end = ((++num_bcn) == ar->num_started_vdevs);
+		if (ar->num_started_vdevs)
+			is_end = ((++ar->fw_stats.num_bcn_recvd) ==
+				  ar->num_started_vdevs);
 
 		list_splice_tail_init(&stats->bcn,
 				      &ar->fw_stats.bcn);
 
-		if (is_end) {
-			ar->fw_stats_done = true;
-			num_bcn = 0;
-		}
+		if (is_end)
+			complete(&ar->fw_stats_done);
 	}
 }
 
-static int ath11k_debugfs_fw_stats_request(struct ath11k *ar,
-					   struct stats_request_params *req_param)
-{
-	struct ath11k_base *ab = ar->ab;
-	unsigned long timeout, time_left;
-	int ret;
-
-	lockdep_assert_held(&ar->conf_mutex);
-
-	/* FW stats can get split when exceeding the stats data buffer limit.
-	 * In that case, since there is no end marking for the back-to-back
-	 * received 'update stats' event, we keep a 3 seconds timeout in case,
-	 * fw_stats_done is not marked yet
-	 */
-	timeout = jiffies + secs_to_jiffies(3);
-
-	ath11k_debugfs_fw_stats_reset(ar);
-
-	reinit_completion(&ar->fw_stats_complete);
-
-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
-
-	if (ret) {
-		ath11k_warn(ab, "could not request fw stats (%d)\n",
-			    ret);
-		return ret;
-	}
-
-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
-
-	if (!time_left)
-		return -ETIMEDOUT;
-
-	for (;;) {
-		if (time_after(jiffies, timeout))
-			break;
-
-		spin_lock_bh(&ar->data_lock);
-		if (ar->fw_stats_done) {
-			spin_unlock_bh(&ar->data_lock);
-			break;
-		}
-		spin_unlock_bh(&ar->data_lock);
-	}
-	return 0;
-}
-
-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
-				u32 vdev_id, u32 stats_id)
-{
-	struct ath11k_base *ab = ar->ab;
-	struct stats_request_params req_param;
-	int ret;
-
-	mutex_lock(&ar->conf_mutex);
-
-	if (ar->state != ATH11K_STATE_ON) {
-		ret = -ENETDOWN;
-		goto err_unlock;
-	}
-
-	req_param.pdev_id = pdev_id;
-	req_param.vdev_id = vdev_id;
-	req_param.stats_id = stats_id;
-
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
-	if (ret)
-		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
-
-	ath11k_dbg(ab, ATH11K_DBG_WMI,
-		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
-		   pdev_id, vdev_id, stats_id);
-
-err_unlock:
-	mutex_unlock(&ar->conf_mutex);
-
-	return ret;
-}
-
 static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
 {
 	struct ath11k *ar = inode->i_private;
@@ -268,7 +146,7 @@ static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
 	req_param.vdev_id = 0;
 	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
 
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
 	if (ret) {
 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
 		goto err_free;
@@ -339,7 +217,7 @@ static int ath11k_open_vdev_stats(struct inode *inode, struct file *file)
 	req_param.vdev_id = 0;
 	req_param.stats_id = WMI_REQUEST_VDEV_STAT;
 
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
 	if (ret) {
 		ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret);
 		goto err_free;
@@ -415,7 +293,7 @@ static int ath11k_open_bcn_stats(struct inode *inode, struct file *file)
 			continue;
 
 		req_param.vdev_id = arvif->vdev_id;
-		ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+		ret = ath11k_mac_fw_stats_request(ar, &req_param);
 		if (ret) {
 			ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", ret);
 			goto err_free;
diff --git a/drivers/net/wireless/ath/ath11k/debugfs.h b/drivers/net/wireless/ath/ath11k/debugfs.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause-Clear */
 /*
 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
 */
 
 #ifndef _ATH11K_DEBUGFS_H_
@@ -273,8 +273,6 @@ void ath11k_debugfs_unregister(struct ath11k *ar);
 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats);
 
 void ath11k_debugfs_fw_stats_init(struct ath11k *ar);
-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
-				u32 vdev_id, u32 stats_id);
 
 static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar)
 {
@@ -381,12 +379,6 @@ static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)
 	return 0;
 }
 
-static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,
-					      u32 pdev_id, u32 vdev_id, u32 stats_id)
-{
-	return 0;
-}
-
 static inline void
 ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
 				enum wmi_direct_buffer_module id,
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
@@ -8997,6 +8997,81 @@ static void ath11k_mac_put_chain_rssi(struct station_info *sinfo,
 	}
 }
 
+static void ath11k_mac_fw_stats_reset(struct ath11k *ar)
+{
+	spin_lock_bh(&ar->data_lock);
+	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
+	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
+	ar->fw_stats.num_vdev_recvd = 0;
+	ar->fw_stats.num_bcn_recvd = 0;
+	spin_unlock_bh(&ar->data_lock);
+}
+
+int ath11k_mac_fw_stats_request(struct ath11k *ar,
+				struct stats_request_params *req_param)
+{
+	struct ath11k_base *ab = ar->ab;
+	unsigned long time_left;
+	int ret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	ath11k_mac_fw_stats_reset(ar);
+
+	reinit_completion(&ar->fw_stats_complete);
+	reinit_completion(&ar->fw_stats_done);
+
+	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+
+	if (ret) {
+		ath11k_warn(ab, "could not request fw stats (%d)\n",
+			    ret);
+		return ret;
+	}
+
+	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
+	if (!time_left)
+		return -ETIMEDOUT;
+
+	/* FW stats can get split when exceeding the stats data buffer limit.
+	 * In that case, since there is no end marking for the back-to-back
+	 * received 'update stats' event, we keep a 3 seconds timeout in case,
+	 * fw_stats_done is not marked yet
+	 */
+	time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);
+	if (!time_left)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+				   u32 vdev_id, u32 stats_id)
+{
+	struct ath11k_base *ab = ar->ab;
+	struct stats_request_params req_param;
+	int ret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	if (ar->state != ATH11K_STATE_ON)
+		return -ENETDOWN;
+
+	req_param.pdev_id = pdev_id;
+	req_param.vdev_id = vdev_id;
+	req_param.stats_id = stats_id;
+
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+	if (ret)
+		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
+
+	ath11k_dbg(ab, ATH11K_DBG_WMI,
+		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
+		   pdev_id, vdev_id, stats_id);
+
+	return ret;
+}
+
 static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
 					 struct ieee80211_vif *vif,
 					 struct ieee80211_sta *sta,
@@ -9031,11 +9106,12 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
 
 	ath11k_mac_put_chain_rssi(sinfo, arsta, "ppdu", false);
 
+	mutex_lock(&ar->conf_mutex);
 	if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) &&
 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
 	    ar->ab->hw_params.supports_rssi_stats &&
-	    !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
-					 WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
+	    !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				     WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
 		ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true);
 	}
 
@@ -9043,9 +9119,10 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
 	if (!signal &&
 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
 	    ar->ab->hw_params.supports_rssi_stats &&
-	    !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
-					  WMI_REQUEST_VDEV_STAT)))
+	    !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				      WMI_REQUEST_VDEV_STAT)))
 		signal = arsta->rssi_beacon;
+	mutex_unlock(&ar->conf_mutex);
 
 	ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
 		   "sta statistics db2dbm %u rssi comb %d rssi beacon %d\n",
@@ -9380,38 +9457,6 @@ exit:
 	return ret;
 }
 
-static int ath11k_fw_stats_request(struct ath11k *ar,
-				   struct stats_request_params *req_param)
-{
-	struct ath11k_base *ab = ar->ab;
-	unsigned long time_left;
-	int ret;
-
-	lockdep_assert_held(&ar->conf_mutex);
-
-	spin_lock_bh(&ar->data_lock);
-	ar->fw_stats_done = false;
-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
-	spin_unlock_bh(&ar->data_lock);
-
-	reinit_completion(&ar->fw_stats_complete);
-
-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
-	if (ret) {
-		ath11k_warn(ab, "could not request fw stats (%d)\n",
-			    ret);
-		return ret;
-	}
-
-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete,
-						1 * HZ);
-
-	if (!time_left)
-		return -ETIMEDOUT;
-
-	return 0;
-}
-
 static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
 				     struct ieee80211_vif *vif,
 				     unsigned int link_id,
@@ -9419,7 +9464,6 @@ static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
 {
 	struct ath11k *ar = hw->priv;
 	struct ath11k_base *ab = ar->ab;
-	struct stats_request_params req_param = {0};
 	struct ath11k_fw_stats_pdev *pdev;
 	int ret;
 
@@ -9431,9 +9475,6 @@ static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
 	 */
 	mutex_lock(&ar->conf_mutex);
 
-	if (ar->state != ATH11K_STATE_ON)
-		goto err_fallback;
-
 	/* Firmware doesn't provide Tx power during CAC hence no need to fetch
 	 * the stats.
 	 */
@@ -9442,10 +9483,8 @@ static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
 		return -EAGAIN;
 	}
 
-	req_param.pdev_id = ar->pdev->pdev_id;
-	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
-
-	ret = ath11k_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				      WMI_REQUEST_PDEV_STAT);
 	if (ret) {
 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
 		goto err_fallback;
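The net effect of the ath11k changes: the old code busy-looped on a fw_stats_done flag for up to 3 seconds, burning CPU; the new code sleeps on a second completion (first the initial stats event, then the explicit end marker). A compact sketch of that two-stage wait, where wait_event() stands in for wait_for_completion_timeout() and the return codes are simplified:

#include <stdio.h>

/* Hypothetical stand-in: nonzero means the event arrived in time. */
static int wait_event(const char *what, int will_arrive, int timeout_s)
{
	printf("waiting up to %ds for %s\n", timeout_s, what);
	return will_arrive;
}

static int fw_stats_request(void)
{
	if (!wait_event("first stats event", 1, 1))
		return -1; /* -ETIMEDOUT */

	/* Stats may arrive split across events with no end marking, so
	 * sleep on a completion instead of polling a done flag in a loop.
	 */
	if (!wait_event("fw_stats_done", 1, 3))
		return -1; /* -ETIMEDOUT */

	return 0;
}

int main(void)
{
	printf("result: %d\n", fw_stats_request());
	return 0;
}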
diff --git a/drivers/net/wireless/ath/ath11k/mac.h b/drivers/net/wireless/ath/ath11k/mac.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause-Clear */
 /*
 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
 */
 
 #ifndef ATH11K_MAC_H
@@ -179,4 +179,6 @@ int ath11k_mac_vif_set_keepalive(struct ath11k_vif *arvif,
 void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar,
 				  struct ieee80211_vif *vif,
 				  struct ieee80211_chanctx_conf *ctx);
+int ath11k_mac_fw_stats_request(struct ath11k *ar,
+				struct stats_request_params *req_param);
 #endif
diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
@@ -8158,6 +8158,11 @@ static void ath11k_peer_assoc_conf_event(struct ath11k_base *ab, struct sk_buff
 static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb)
 {
 	struct ath11k_fw_stats stats = {};
+	size_t total_vdevs_started = 0;
+	struct ath11k_pdev *pdev;
+	bool is_end = true;
+	int i;
+
 	struct ath11k *ar;
 	int ret;
 
@@ -8184,25 +8189,57 @@ static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *sk
 
 	spin_lock_bh(&ar->data_lock);
 
-	/* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via
+	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and
+	 * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via
 	 * debugfs fw stats. Therefore, processing it separately.
 	 */
 	if (stats.stats_id == WMI_REQUEST_PDEV_STAT) {
 		list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs);
-		ar->fw_stats_done = true;
+		complete(&ar->fw_stats_done);
 		goto complete;
 	}
 
-	/* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT
-	 * are currently requested only via debugfs fw stats. Hence, processing these
-	 * in debugfs context
+	if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
+		complete(&ar->fw_stats_done);
+		goto complete;
+	}
+
+	if (stats.stats_id == WMI_REQUEST_VDEV_STAT) {
+		if (list_empty(&stats.vdevs)) {
+			ath11k_warn(ab, "empty vdev stats");
+			goto complete;
+		}
+		/* FW sends all the active VDEV stats irrespective of PDEV,
+		 * hence limit until the count of all VDEVs started
+		 */
+		for (i = 0; i < ab->num_radios; i++) {
+			pdev = rcu_dereference(ab->pdevs_active[i]);
+			if (pdev && pdev->ar)
+				total_vdevs_started += ar->num_started_vdevs;
+		}
+
+		if (total_vdevs_started)
+			is_end = ((++ar->fw_stats.num_vdev_recvd) ==
+				  total_vdevs_started);
+
+		list_splice_tail_init(&stats.vdevs,
+				      &ar->fw_stats.vdevs);
+
+		if (is_end)
+			complete(&ar->fw_stats_done);
+
+		goto complete;
+	}
+
+	/* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats.
+	 * Hence, processing it in debugfs context
 	 */
 	ath11k_debugfs_fw_stats_process(ar, &stats);
 
complete:
 	complete(&ar->fw_stats_complete);
-	rcu_read_unlock();
 	spin_unlock_bh(&ar->data_lock);
+	rcu_read_unlock();
 
 	/* Since the stats's pdev, vdev and beacon list are spliced and reinitialised
 	 * at this point, no need to free the individual list.
@@ -2129,7 +2129,8 @@ int ath12k_core_init(struct ath12k_base *ab)
     if (!ag) {
         mutex_unlock(&ath12k_hw_group_mutex);
         ath12k_warn(ab, "unable to get hw group\n");
-        return -ENODEV;
+        ret = -ENODEV;
+        goto err_unregister_notifier;
     }

     mutex_unlock(&ath12k_hw_group_mutex);
@@ -2144,7 +2145,7 @@ int ath12k_core_init(struct ath12k_base *ab)
         if (ret) {
             mutex_unlock(&ag->mutex);
             ath12k_warn(ab, "unable to create hw group\n");
-            goto err;
+            goto err_destroy_hw_group;
         }
     }

@@ -2152,9 +2153,12 @@ int ath12k_core_init(struct ath12k_base *ab)

     return 0;

-err:
+err_destroy_hw_group:
     ath12k_core_hw_group_destroy(ab->ag);
     ath12k_core_hw_group_unassign(ab);
+err_unregister_notifier:
+    ath12k_core_panic_notifier_unregister(ab);

     return ret;
 }
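The hunk above converts a bare `return -ENODEV` into the standard kernel goto-unwind ladder: every failure jumps to the label that undoes exactly what was acquired so far, in reverse order, so the panic notifier registered earlier is never leaked. A standalone sketch of the pattern, with illustrative resource names rather than the actual ath12k APIs:

#include <stdio.h>

static int acquire_a(void) { puts("a acquired"); return 0; }
static void release_a(void) { puts("a released"); }
static int acquire_b(void) { puts("b acquired"); return 0; }
static void release_b(void) { puts("b released"); }
static int acquire_c(void) { puts("c failed"); return -1; }

int init(void)
{
    int ret;

    ret = acquire_a();
    if (ret)
        return ret;

    ret = acquire_b();
    if (ret)
        goto err_release_a;

    ret = acquire_c();
    if (ret)
        goto err_release_b;   /* unwind b, then a, in reverse order */

    return 0;

err_release_b:
    release_b();
err_release_a:
    release_a();
    return ret;
}

int main(void) { return init() ? 1 : 0; }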
@@ -585,7 +585,8 @@ enum hal_reo_cmd_type {
  *	or cache was blocked
  * @HAL_REO_CMD_FAILED: Command execution failed, could be due to
  *	invalid queue desc
- * @HAL_REO_CMD_RESOURCE_BLOCKED:
+ * @HAL_REO_CMD_RESOURCE_BLOCKED: Command could not be executed because
+ *	one or more descriptors were blocked
  * @HAL_REO_CMD_DRAIN:
  */
 enum hal_reo_cmd_status {
@@ -951,6 +951,8 @@ static const struct ath12k_hw_regs qcn9274_v1_regs = {
     .hal_umac_ce0_dest_reg_base = 0x01b81000,
     .hal_umac_ce1_src_reg_base = 0x01b82000,
     .hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+    .gcc_gcc_pcie_hot_rst = 0x1e38338,
 };

 static const struct ath12k_hw_regs qcn9274_v2_regs = {
@@ -1042,6 +1044,8 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
     .hal_umac_ce0_dest_reg_base = 0x01b81000,
     .hal_umac_ce1_src_reg_base = 0x01b82000,
     .hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+    .gcc_gcc_pcie_hot_rst = 0x1e38338,
 };

 static const struct ath12k_hw_regs ipq5332_regs = {
@@ -1215,6 +1219,8 @@ static const struct ath12k_hw_regs wcn7850_regs = {
     .hal_umac_ce0_dest_reg_base = 0x01b81000,
     .hal_umac_ce1_src_reg_base = 0x01b82000,
     .hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+    .gcc_gcc_pcie_hot_rst = 0x1e40304,
 };

 static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
@@ -375,6 +375,8 @@ struct ath12k_hw_regs {
     u32 hal_reo_cmd_ring_base;

     u32 hal_reo_status_ring_base;
+
+    u32 gcc_gcc_pcie_hot_rst;
 };

 static inline const char *ath12k_bd_ie_type_str(enum ath12k_bd_ie_type type)
@@ -292,10 +292,10 @@ static void ath12k_pci_enable_ltssm(struct ath12k_base *ab)

     ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val);

-    val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+    val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
     val |= GCC_GCC_PCIE_HOT_RST_VAL;
-    ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val);
-    val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+    ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST(ab), val);
+    val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));

     ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val);
@@ -28,7 +28,9 @@
 #define PCIE_PCIE_PARF_LTSSM		0x1e081b0
 #define PARM_LTSSM_VALUE		0x111

-#define GCC_GCC_PCIE_HOT_RST		0x1e38338
+#define GCC_GCC_PCIE_HOT_RST(ab) \
+	((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst)
+
 #define GCC_GCC_PCIE_HOT_RST_VAL	0x10

 #define PCIE_PCIE_INT_ALL_CLEAR		0x1e08228
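This is the core of the WCN7850 crash fix: the hot-reset register offset stops being a compile-time constant (correct only for QCN9274) and becomes a per-chip lookup through the device's hw_params, so WCN7850 gets its own 0x1e40304 address. A small standalone sketch of the indirection, with struct names and values taken from the diff but otherwise simplified:

#include <stdint.h>
#include <stdio.h>

struct hw_regs {
    uint32_t gcc_gcc_pcie_hot_rst;
};

struct hw_params {
    const struct hw_regs *regs;
};

struct device {
    const struct hw_params *hw_params;
};

/* Same shape as the macro in the hunk: constant replaced by a lookup. */
#define GCC_GCC_PCIE_HOT_RST(ab) \
    ((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst)

static const struct hw_regs qcn9274_regs = { .gcc_gcc_pcie_hot_rst = 0x1e38338 };
static const struct hw_regs wcn7850_regs = { .gcc_gcc_pcie_hot_rst = 0x1e40304 };

int main(void)
{
    const struct hw_params qcn = { &qcn9274_regs };
    const struct hw_params wcn = { &wcn7850_regs };
    struct device a = { &qcn }, b = { &wcn };

    printf("qcn9274 hot_rst @ 0x%x\n", (unsigned)GCC_GCC_PCIE_HOT_RST(&a));
    printf("wcn7850 hot_rst @ 0x%x\n", (unsigned)GCC_GCC_PCIE_HOT_RST(&b));
    return 0;
}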
@@ -179,9 +179,11 @@ void wil_mask_irq(struct wil6210_priv *wil)
     wil_dbg_irq(wil, "mask_irq\n");

     wil6210_mask_irq_tx(wil);
-    wil6210_mask_irq_tx_edma(wil);
+    if (wil->use_enhanced_dma_hw)
+        wil6210_mask_irq_tx_edma(wil);
     wil6210_mask_irq_rx(wil);
-    wil6210_mask_irq_rx_edma(wil);
+    if (wil->use_enhanced_dma_hw)
+        wil6210_mask_irq_rx_edma(wil);
     wil6210_mask_irq_misc(wil, true);
     wil6210_mask_irq_pseudo(wil);
 }
@@ -190,10 +192,12 @@ void wil_unmask_irq(struct wil6210_priv *wil)
 {
     wil_dbg_irq(wil, "unmask_irq\n");

-    wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC),
-          WIL_ICR_ICC_VALUE);
-    wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC),
-          WIL_ICR_ICC_VALUE);
+    if (wil->use_enhanced_dma_hw) {
+        wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC),
+              WIL_ICR_ICC_VALUE);
+        wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC),
+              WIL_ICR_ICC_VALUE);
+    }
     wil_w(wil, RGF_DMA_EP_MISC_ICR + offsetof(struct RGF_ICR, ICC),
           WIL_ICR_ICC_MISC_VALUE);
     wil_w(wil, RGF_INT_GEN_TX_ICR + offsetof(struct RGF_ICR, ICC),
@@ -845,10 +849,12 @@ void wil6210_clear_irq(struct wil6210_priv *wil)
                 offsetof(struct RGF_ICR, ICR));
     wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_TX_ICR) +
                 offsetof(struct RGF_ICR, ICR));
-    wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) +
-                offsetof(struct RGF_ICR, ICR));
-    wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) +
-                offsetof(struct RGF_ICR, ICR));
+    if (wil->use_enhanced_dma_hw) {
+        wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) +
+                    offsetof(struct RGF_ICR, ICR));
+        wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) +
+                    offsetof(struct RGF_ICR, ICR));
+    }
     wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_MISC_ICR) +
                 offsetof(struct RGF_ICR, ICR));
     wmb(); /* make sure write completed */
@@ -1501,11 +1501,27 @@ static int _iwl_pci_resume(struct device *device, bool restore)
      * Scratch value was altered, this means the device was powered off, we
      * need to reset it completely.
      * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
-     * so assume that any bits there mean that the device is usable.
+     * but not bits [15:8]. So if we have bits set in lower word, assume
+     * the device is alive.
+     * For older devices, just try silently to grab the NIC.
      */
-    if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
-        !iwl_read32(trans, CSR_FUNC_SCRATCH))
-        device_was_powered_off = true;
+    if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+        if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
+              CSR_FUNC_SCRATCH_POWER_OFF_MASK))
+            device_was_powered_off = true;
+    } else {
+        /*
+         * bh are re-enabled by iwl_trans_pcie_release_nic_access,
+         * so re-enable them if _iwl_trans_pcie_grab_nic_access fails.
+         */
+        local_bh_disable();
+        if (_iwl_trans_pcie_grab_nic_access(trans, true)) {
+            iwl_trans_pcie_release_nic_access(trans);
+        } else {
+            device_was_powered_off = true;
+            local_bh_enable();
+        }
+    }

     if (restore || device_was_powered_off) {
         trans->state = IWL_TRANS_NO_FW;
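The revised check above masks the scratch register instead of testing the whole 32-bit value: only the low byte is guaranteed to be cleared by a real power cycle, so surviving upper bits must not count as "device alive". A tiny sketch of the masked test; the mask name follows the diff, the sample register values are made up:

#include <stdint.h>
#include <stdio.h>

#define CSR_FUNC_SCRATCH_POWER_OFF_MASK 0xff

static int device_was_powered_off(uint32_t scratch)
{
    /* Bits [15:8] may survive a power cycle; bits [7:0] do not. */
    return !(scratch & CSR_FUNC_SCRATCH_POWER_OFF_MASK);
}

int main(void)
{
    printf("0x0100 -> %d (upper bits alone no longer count)\n",
           device_was_powered_off(0x0100));
    printf("0x0001 -> %d (low byte set: device alive)\n",
           device_was_powered_off(0x0001));
    return 0;
}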
@@ -403,14 +403,12 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv,

         if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 &&
             bss_desc->bcn_ht_oper->ht_param &
-            IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) {
-            chan_list->chan_scan_param[0].radio_type |=
-                CHAN_BW_40MHZ << 2;
+            IEEE80211_HT_PARAM_CHAN_WIDTH_ANY)
             SET_SECONDARYCHAN(chan_list->chan_scan_param[0].
                               radio_type,
                               (bss_desc->bcn_ht_oper->ht_param &
                                IEEE80211_HT_PARAM_CHA_SEC_OFFSET));
-        }

         *buffer += struct_size(chan_list, chan_scan_param, 1);
         ret_len += struct_size(chan_list, chan_scan_param, 1);
     }
@@ -98,17 +98,7 @@ static inline int queue_cnt(const struct timestamp_event_queue *q)
 /* Check if ptp virtual clock is in use */
 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
 {
-    bool in_use = false;
-
-    if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
-        return true;
-
-    if (!ptp->is_virtual_clock && ptp->n_vclocks)
-        in_use = true;
-
-    mutex_unlock(&ptp->n_vclocks_mux);
-
-    return in_use;
+    return !ptp->is_virtual_clock;
 }

 /* Check if ptp clock shall be free running */
@@ -242,6 +242,7 @@ struct adv_info {
     __u8    mesh;
     __u8    instance;
     __u8    handle;
+    __u8    sid;
     __u32   flags;
     __u16   timeout;
     __u16   remaining_time;
@@ -546,6 +547,7 @@ struct hci_dev {
     struct hci_conn_hash    conn_hash;

     struct list_head    mesh_pending;
+    struct mutex        mgmt_pending_lock;
     struct list_head    mgmt_pending;
     struct list_head    reject_list;
     struct list_head    accept_list;
@@ -1550,13 +1552,14 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
                                  u16 timeout);
 struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
                               __u8 dst_type, struct bt_iso_qos *qos);
-struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid,
                               struct bt_iso_qos *qos,
                               __u8 base_len, __u8 *base);
 struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
                                  __u8 dst_type, struct bt_iso_qos *qos);
 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
-                                 __u8 dst_type, struct bt_iso_qos *qos,
+                                 __u8 dst_type, __u8 sid,
+                                 struct bt_iso_qos *qos,
                                  __u8 data_len, __u8 *data);
 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
                                     __u8 dst_type, __u8 sid, struct bt_iso_qos *qos);
@@ -1831,6 +1834,7 @@ int hci_remove_remote_oob_data(struct hci_dev *hdev, bdaddr_t *bdaddr,

 void hci_adv_instances_clear(struct hci_dev *hdev);
 struct adv_info *hci_find_adv_instance(struct hci_dev *hdev, u8 instance);
+struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid);
 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance);
 struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance,
                                       u32 flags, u16 adv_data_len, u8 *adv_data,
@@ -1838,7 +1842,7 @@ struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance,
                                       u16 timeout, u16 duration, s8 tx_power,
                                       u32 min_interval, u32 max_interval,
                                       u8 mesh_handle);
-struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance,
+struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid,
                                       u32 flags, u8 data_len, u8 *data,
                                       u32 min_interval, u32 max_interval);
 int hci_set_adv_instance_data(struct hci_dev *hdev, u8 instance,
@@ -2400,7 +2404,6 @@ void mgmt_advertising_added(struct sock *sk, struct hci_dev *hdev,
                             u8 instance);
 void mgmt_advertising_removed(struct sock *sk, struct hci_dev *hdev,
                               u8 instance);
-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle);
 int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip);
 void mgmt_adv_monitor_device_lost(struct hci_dev *hdev, u16 handle,
                                   bdaddr_t *bdaddr, u8 addr_type);
@@ -115,8 +115,8 @@ int hci_enable_ext_advertising_sync(struct hci_dev *hdev, u8 instance);
 int hci_enable_advertising_sync(struct hci_dev *hdev);
 int hci_enable_advertising(struct hci_dev *hdev);

-int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
-                           u8 *data, u32 flags, u16 min_interval,
+int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid,
+                           u8 data_len, u8 *data, u32 flags, u16 min_interval,
                            u16 max_interval, u16 sync_interval);

 int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance);
@@ -973,14 +973,6 @@ static inline void qdisc_qstats_qlen_backlog(struct Qdisc *sch, __u32 *qlen,
     *backlog = qstats.backlog;
 }

-static inline void qdisc_tree_flush_backlog(struct Qdisc *sch)
-{
-    __u32 qlen, backlog;
-
-    qdisc_qstats_qlen_backlog(sch, &qlen, &backlog);
-    qdisc_tree_reduce_backlog(sch, qlen, backlog);
-}
-
 static inline void qdisc_purge_queue(struct Qdisc *sch)
 {
     __u32 qlen, backlog;
@@ -3010,8 +3010,11 @@ int sock_ioctl_inout(struct sock *sk, unsigned int cmd,
 int sk_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
 static inline bool sk_is_readable(struct sock *sk)
 {
-    if (sk->sk_prot->sock_is_readable)
-        return sk->sk_prot->sock_is_readable(sk);
+    const struct proto *prot = READ_ONCE(sk->sk_prot);
+
+    if (prot->sock_is_readable)
+        return prot->sock_is_readable(sk);
+
     return false;
 }
 #endif /* _SOCK_H */
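The sk_is_readable() change is a classic single-snapshot fix: the old code dereferenced sk->sk_prot twice, so a concurrent protocol switch could be observed between the test and the indirect call. Reading the pointer once with READ_ONCE() makes both uses see the same snapshot. A standalone userspace sketch of the idea, with C11 atomic_load standing in for the kernel's READ_ONCE() and an illustrative proto layout:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct proto {
    bool (*sock_is_readable)(void *sk);
};

static _Atomic(const struct proto *) sk_prot;

static bool sk_is_readable(void *sk)
{
    /* One load; the NULL test and the call use the same snapshot. */
    const struct proto *prot = atomic_load(&sk_prot);

    if (prot->sock_is_readable)
        return prot->sock_is_readable(sk);

    return false;
}

static bool always_readable(void *sk) { (void)sk; return true; }

int main(void)
{
    static const struct proto tcpish = { always_readable };

    atomic_store(&sk_prot, &tcpish);
    printf("readable: %d\n", sk_is_readable(NULL));
    return 0;
}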
@@ -242,7 +242,7 @@ u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
     return ad_len;
 }

-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size)
 {
     struct adv_info *adv = NULL;
     u8 ad_len = 0, flags = 0;
@@ -286,7 +286,7 @@ u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
     /* If flags would still be empty, then there is no need to
      * include the "Flags" AD field".
      */
-    if (flags) {
+    if (flags && (ad_len + eir_precalc_len(1) <= size)) {
         ptr[0] = 0x02;
         ptr[1] = EIR_FLAGS;
         ptr[2] = flags;
@@ -316,7 +316,8 @@ skip_flags:
     }

     /* Provide Tx Power only if we can provide a valid value for it */
-    if (adv_tx_power != HCI_TX_POWER_INVALID) {
+    if (adv_tx_power != HCI_TX_POWER_INVALID &&
+        (ad_len + eir_precalc_len(1) <= size)) {
         ptr[0] = 0x02;
         ptr[1] = EIR_TX_POWER;
         ptr[2] = (u8)adv_tx_power;
@@ -366,17 +367,19 @@ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr)

 void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len)
 {
-    while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) {
+    size_t dlen;
+
+    while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) {
         u16 value = get_unaligned_le16(eir);

         if (uuid == value) {
             if (len)
-                *len -= 2;
+                *len = dlen - 2;
             return &eir[2];
         }

-        eir += *len;
-        eir_len -= *len;
+        eir += dlen;
+        eir_len -= dlen;
     }

     return NULL;
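The eir_get_service_data() fix is worth spelling out: the old loop used the caller's *len out-parameter as its own cursor, so after the first iteration (or after the `*len -= 2` on a match) the advance step used a corrupted length. Tracking the element length in a local and only writing *len on a match fixes both problems. A standalone sketch of the corrected iteration over an EIR-like type-length-value buffer; the record layout here ([len][type][uuid lo][uuid hi][payload...]) is a simplification, not the kernel's eir_get_data() contract:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define EIR_SERVICE_DATA 0x16

static void *find_service_data(uint8_t *eir, size_t eir_len, uint16_t uuid,
                               size_t *len)
{
    while (eir_len >= 2) {
        size_t dlen = eir[0];           /* length of type + payload */

        if (dlen == 0 || dlen + 1 > eir_len)
            break;                      /* malformed element */
        if (eir[1] == EIR_SERVICE_DATA && dlen >= 3) {
            uint16_t value = eir[2] | (eir[3] << 8);

            if (value == uuid) {
                if (len)
                    *len = dlen - 3;    /* payload only, on match */
                return &eir[4];
            }
        }
        eir += dlen + 1;                /* advance by this element */
        eir_len -= dlen + 1;
    }
    return NULL;
}

int main(void)
{
    /* one flags element, then service data for UUID 0x1852 */
    uint8_t eir[] = { 0x02, 0x01, 0x06,
                      0x06, 0x16, 0x52, 0x18, 0xaa, 0xbb, 0xcc };
    size_t len = 0;
    uint8_t *data = find_service_data(eir, sizeof(eir), 0x1852, &len);

    if (data)
        printf("found %zu payload bytes, first 0x%02x\n", len, data[0]);
    return 0;
}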
@@ -9,7 +9,7 @@

 void eir_create(struct hci_dev *hdev, u8 *data);

-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
+u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size);
 u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr);
 u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
@@ -1501,8 +1501,8 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos)

 /* This function requires the caller holds hdev->lock */
 static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
-                                    struct bt_iso_qos *qos, __u8 base_len,
-                                    __u8 *base)
+                                    __u8 sid, struct bt_iso_qos *qos,
+                                    __u8 base_len, __u8 *base)
 {
     struct hci_conn *conn;
     int err;
@@ -1543,6 +1543,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
         return conn;

     conn->state = BT_CONNECT;
+    conn->sid = sid;

     hci_conn_hold(conn);
     return conn;
@@ -2062,7 +2063,8 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
     if (qos->bcast.bis)
         sync_interval = interval * 4;

-    err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len,
+    err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->sid,
+                                 conn->le_per_adv_data_len,
                                  conn->le_per_adv_data, flags, interval,
                                  interval, sync_interval);
     if (err)
@@ -2134,7 +2136,7 @@ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
     }
 }

-struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid,
                               struct bt_iso_qos *qos,
                               __u8 base_len, __u8 *base)
 {
@@ -2156,7 +2158,7 @@ struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
                       base, base_len);

     /* We need hci_conn object using the BDADDR_ANY as dst */
-    conn = hci_add_bis(hdev, dst, qos, base_len, eir);
+    conn = hci_add_bis(hdev, dst, sid, qos, base_len, eir);
     if (IS_ERR(conn))
         return conn;

@@ -2207,20 +2209,35 @@ static void bis_mark_per_adv(struct hci_conn *conn, void *data)
 }

 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
-                                 __u8 dst_type, struct bt_iso_qos *qos,
+                                 __u8 dst_type, __u8 sid,
+                                 struct bt_iso_qos *qos,
                                  __u8 base_len, __u8 *base)
 {
     struct hci_conn *conn;
     int err;
     struct iso_list_data data;

-    conn = hci_bind_bis(hdev, dst, qos, base_len, base);
+    conn = hci_bind_bis(hdev, dst, sid, qos, base_len, base);
     if (IS_ERR(conn))
         return conn;

     if (conn->state == BT_CONNECTED)
         return conn;

+    /* Check if SID needs to be allocated then search for the first
+     * available.
+     */
+    if (conn->sid == HCI_SID_INVALID) {
+        u8 sid;
+
+        for (sid = 0; sid <= 0x0f; sid++) {
+            if (!hci_find_adv_sid(hdev, sid)) {
+                conn->sid = sid;
+                break;
+            }
+        }
+    }
+
     data.big = qos->bcast.big;
     data.bis = qos->bcast.bis;
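The SID allocation loop added above relies on advertising Set IDs being a 4-bit namespace: it walks 0x00..0x0f and takes the first value no existing instance uses, leaving HCI_SID_INVALID if all sixteen are taken. A standalone sketch of the same first-free scan, with a plain array standing in for hdev->adv_instances:

#include <stdbool.h>
#include <stdio.h>

#define SID_INVALID 0xff

static bool sid_in_use(const unsigned char *used, int n, unsigned char sid)
{
    for (int i = 0; i < n; i++)
        if (used[i] == sid)
            return true;
    return false;
}

static unsigned char alloc_sid(const unsigned char *used, int n)
{
    for (unsigned char sid = 0; sid <= 0x0f; sid++)
        if (!sid_in_use(used, n, sid))
            return sid;
    return SID_INVALID; /* all sixteen taken */
}

int main(void)
{
    unsigned char used[] = { 0x00, 0x01, 0x03 };

    printf("first free SID: 0x%02x\n",
           alloc_sid(used, (int)sizeof(used))); /* prints 0x02 */
    return 0;
}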
@@ -1584,6 +1584,19 @@ struct adv_info *hci_find_adv_instance(struct hci_dev *hdev, u8 instance)
     return NULL;
 }

+/* This function requires the caller holds hdev->lock */
+struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid)
+{
+    struct adv_info *adv;
+
+    list_for_each_entry(adv, &hdev->adv_instances, list) {
+        if (adv->sid == sid)
+            return adv;
+    }
+
+    return NULL;
+}
+
 /* This function requires the caller holds hdev->lock */
 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance)
 {
@@ -1736,7 +1749,7 @@ struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance,
 }

 /* This function requires the caller holds hdev->lock */
-struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance,
+struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid,
                                       u32 flags, u8 data_len, u8 *data,
                                       u32 min_interval, u32 max_interval)
 {
@@ -1748,6 +1761,7 @@ struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance,
     if (IS_ERR(adv))
         return adv;

+    adv->sid = sid;
     adv->periodic = true;
     adv->per_adv_data_len = data_len;

@@ -1877,10 +1891,8 @@ void hci_free_adv_monitor(struct hci_dev *hdev, struct adv_monitor *monitor)
     if (monitor->handle)
         idr_remove(&hdev->adv_monitors_idr, monitor->handle);

-    if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) {
+    if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED)
         hdev->adv_monitors_cnt--;
-        mgmt_adv_monitor_removed(hdev, monitor->handle);
-    }

     kfree(monitor);
 }
@@ -2487,6 +2499,7 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)

     mutex_init(&hdev->lock);
     mutex_init(&hdev->req_lock);
+    mutex_init(&hdev->mgmt_pending_lock);

     ida_init(&hdev->unset_handle_ida);

@@ -3417,23 +3430,18 @@ static void hci_link_tx_to(struct hci_dev *hdev, __u8 type)

     bt_dev_err(hdev, "link tx timeout");

-    rcu_read_lock();
+    hci_dev_lock(hdev);

     /* Kill stalled connections */
-    list_for_each_entry_rcu(c, &h->list, list) {
+    list_for_each_entry(c, &h->list, list) {
         if (c->type == type && c->sent) {
             bt_dev_err(hdev, "killing stalled connection %pMR",
                        &c->dst);
-            /* hci_disconnect might sleep, so, we have to release
-             * the RCU read lock before calling it.
-             */
-            rcu_read_unlock();
             hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
-            rcu_read_lock();
         }
     }

-    rcu_read_unlock();
+    hci_dev_unlock(hdev);
 }

 static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
@@ -1261,10 +1261,12 @@ int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)
         hci_cpu_to_le24(adv->min_interval, cp.min_interval);
         hci_cpu_to_le24(adv->max_interval, cp.max_interval);
         cp.tx_power = adv->tx_power;
+        cp.sid = adv->sid;
     } else {
         hci_cpu_to_le24(hdev->le_adv_min_interval, cp.min_interval);
         hci_cpu_to_le24(hdev->le_adv_max_interval, cp.max_interval);
         cp.tx_power = HCI_ADV_TX_POWER_NO_PREFERENCE;
+        cp.sid = 0x00;
     }

     secondary_adv = (flags & MGMT_ADV_FLAG_SEC_MASK);
@@ -1559,7 +1561,8 @@ static int hci_enable_per_advertising_sync(struct hci_dev *hdev, u8 instance)
 static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
 {
     u8 bid[3];
-    u8 ad[4 + 3];
+    u8 ad[HCI_MAX_EXT_AD_LENGTH];
+    u8 len;

     /* Skip if NULL adv as instance 0x00 is used for general purpose
      * advertising so it cannot used for the likes of Broadcast Announcement
@@ -1585,14 +1588,16 @@ static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)

     /* Generate Broadcast ID */
     get_random_bytes(bid, sizeof(bid));
-    eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
-    hci_set_adv_instance_data(hdev, adv->instance, sizeof(ad), ad, 0, NULL);
+    len = eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
+    memcpy(ad + len, adv->adv_data, adv->adv_data_len);
+    hci_set_adv_instance_data(hdev, adv->instance, len + adv->adv_data_len,
+                              ad, 0, NULL);

     return hci_update_adv_data_sync(hdev, adv->instance);
 }

-int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
-                           u8 *data, u32 flags, u16 min_interval,
+int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid,
+                           u8 data_len, u8 *data, u32 flags, u16 min_interval,
                            u16 max_interval, u16 sync_interval)
 {
     struct adv_info *adv = NULL;
@@ -1603,9 +1608,28 @@ int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid,

     if (instance) {
         adv = hci_find_adv_instance(hdev, instance);
-        /* Create an instance if that could not be found */
-        if (!adv) {
-            adv = hci_add_per_instance(hdev, instance, flags,
+
+        if (adv) {
+            if (sid != HCI_SID_INVALID && adv->sid != sid) {
+                /* If the SID don't match attempt to find by
+                 * SID.
+                 */
+                adv = hci_find_adv_sid(hdev, sid);
+                if (!adv) {
+                    bt_dev_err(hdev,
+                               "Unable to find adv_info");
+                    return -EINVAL;
+                }
+            }
+
+            /* Turn it into periodic advertising */
+            adv->periodic = true;
+            adv->per_adv_data_len = data_len;
+            if (data)
+                memcpy(adv->per_adv_data, data, data_len);
+            adv->flags = flags;
+        } else if (!adv) {
+            /* Create an instance if that could not be found */
+            adv = hci_add_per_instance(hdev, instance, sid, flags,
                                        data_len, data,
                                        sync_interval,
                                        sync_interval);
@@ -1812,7 +1836,8 @@ static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
         return 0;
     }

-    len = eir_create_adv_data(hdev, instance, pdu->data);
+    len = eir_create_adv_data(hdev, instance, pdu->data,
+                              HCI_MAX_EXT_AD_LENGTH);

     pdu->length = len;
     pdu->handle = adv ? adv->handle : instance;
@@ -1843,7 +1868,7 @@ static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)

     memset(&cp, 0, sizeof(cp));

-    len = eir_create_adv_data(hdev, instance, cp.data);
+    len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));

     /* There's nothing to do if the data hasn't changed */
     if (hdev->adv_data_len == len &&
@@ -336,7 +336,7 @@ static int iso_connect_bis(struct sock *sk)
     struct hci_dev  *hdev;
     int err;

-    BT_DBG("%pMR", &iso_pi(sk)->src);
+    BT_DBG("%pMR (SID 0x%2.2x)", &iso_pi(sk)->src, iso_pi(sk)->bc_sid);

     hdev = hci_get_route(&iso_pi(sk)->dst, &iso_pi(sk)->src,
                          iso_pi(sk)->src_type);
@@ -365,7 +365,7 @@ static int iso_connect_bis(struct sock *sk)

     /* Just bind if DEFER_SETUP has been set */
     if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
-        hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst,
+        hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst, iso_pi(sk)->bc_sid,
                             &iso_pi(sk)->qos, iso_pi(sk)->base_len,
                             iso_pi(sk)->base);
         if (IS_ERR(hcon)) {
@@ -375,12 +375,16 @@ static int iso_connect_bis(struct sock *sk)
     } else {
         hcon = hci_connect_bis(hdev, &iso_pi(sk)->dst,
                                le_addr_type(iso_pi(sk)->dst_type),
-                               &iso_pi(sk)->qos, iso_pi(sk)->base_len,
-                               iso_pi(sk)->base);
+                               iso_pi(sk)->bc_sid, &iso_pi(sk)->qos,
+                               iso_pi(sk)->base_len, iso_pi(sk)->base);
         if (IS_ERR(hcon)) {
             err = PTR_ERR(hcon);
             goto unlock;
         }
+
+        /* Update SID if it was not set */
+        if (iso_pi(sk)->bc_sid == HCI_SID_INVALID)
+            iso_pi(sk)->bc_sid = hcon->sid;
     }

     conn = iso_conn_add(hcon);
@@ -1337,10 +1341,13 @@ static int iso_sock_getname(struct socket *sock, struct sockaddr *addr,
     addr->sa_family = AF_BLUETOOTH;

     if (peer) {
+        struct hci_conn *hcon = iso_pi(sk)->conn ?
+                                iso_pi(sk)->conn->hcon : NULL;
+
         bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
         sa->iso_bdaddr_type = iso_pi(sk)->dst_type;

-        if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
+        if (hcon && hcon->type == BIS_LINK) {
             sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
             sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
             memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
@@ -1447,22 +1447,17 @@ static void settings_rsp(struct mgmt_pending_cmd *cmd, void *data)

     send_settings_rsp(cmd->sk, cmd->opcode, match->hdev);

-    list_del(&cmd->list);
-
     if (match->sk == NULL) {
         match->sk = cmd->sk;
         sock_hold(match->sk);
     }
-
-    mgmt_pending_free(cmd);
 }

 static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
 {
     u8 *status = data;

-    mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, *status);
-    mgmt_pending_remove(cmd);
+    mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, *status);
 }

 static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
@@ -1476,8 +1471,6 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)

     if (cmd->cmd_complete) {
         cmd->cmd_complete(cmd, match->mgmt_status);
-        mgmt_pending_remove(cmd);
-
         return;
     }

@@ -1486,13 +1479,13 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)

 static int generic_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
 {
-    return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
+    return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
                              cmd->param, cmd->param_len);
 }

 static int addr_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
 {
-    return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
+    return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
                              cmd->param, sizeof(struct mgmt_addr_info));
 }

@@ -1532,7 +1525,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,

     if (err) {
         u8 mgmt_err = mgmt_status(err);
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
         hci_dev_clear_flag(hdev, HCI_LIMITED_DISCOVERABLE);
         goto done;
     }
@@ -1707,7 +1700,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,

     if (err) {
         u8 mgmt_err = mgmt_status(err);
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
         goto done;
     }

@@ -1943,8 +1936,8 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
             new_settings(hdev, NULL);
         }

-        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, cmd_status_rsp,
-                             &mgmt_err);
+        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
+                             cmd_status_rsp, &mgmt_err);
         return;
     }

@@ -1954,7 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
         changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
     }

-    mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match);
+    mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);

     if (changed)
         new_settings(hdev, match.sk);
@@ -2074,12 +2067,12 @@ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
     bt_dev_dbg(hdev, "err %d", err);

     if (status) {
-        mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, cmd_status_rsp,
-                             &status);
+        mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
+                             &status);
         return;
     }

-    mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, settings_rsp, &match);
+    mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);

     new_settings(hdev, match.sk);

@@ -2138,7 +2131,7 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
     struct sock *sk = cmd->sk;

     if (status) {
-        mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev,
+        mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
                              cmd_status_rsp, &status);
         return;
     }
@@ -2638,7 +2631,7 @@ static void mgmt_class_complete(struct hci_dev *hdev, void *data, int err)

     bt_dev_dbg(hdev, "err %d", err);

-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                       mgmt_status(err), hdev->dev_class, 3);

     mgmt_pending_free(cmd);
@@ -3427,7 +3420,7 @@ static int pairing_complete(struct mgmt_pending_cmd *cmd, u8 status)
     bacpy(&rp.addr.bdaddr, &conn->dst);
     rp.addr.type = link_to_bdaddr(conn->type, conn->dst_type);

-    err = mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_PAIR_DEVICE,
+    err = mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_PAIR_DEVICE,
                             status, &rp, sizeof(rp));

     /* So we don't get further callbacks for this connection */
@@ -5108,24 +5101,14 @@ static void mgmt_adv_monitor_added(struct sock *sk, struct hci_dev *hdev,
     mgmt_event(MGMT_EV_ADV_MONITOR_ADDED, hdev, &ev, sizeof(ev), sk);
 }

-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle)
+static void mgmt_adv_monitor_removed(struct sock *sk, struct hci_dev *hdev,
+                                     __le16 handle)
 {
     struct mgmt_ev_adv_monitor_removed ev;
-    struct mgmt_pending_cmd *cmd;
-    struct sock *sk_skip = NULL;
-    struct mgmt_cp_remove_adv_monitor *cp;

-    cmd = pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev);
-    if (cmd) {
-        cp = cmd->param;
+    ev.monitor_handle = handle;

-        if (cp->monitor_handle)
-            sk_skip = cmd->sk;
-    }
-
-    ev.monitor_handle = cpu_to_le16(handle);
-
-    mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk_skip);
+    mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk);
 }

 static int read_adv_mon_features(struct sock *sk, struct hci_dev *hdev,
@@ -5196,7 +5179,7 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
         hci_update_passive_scan(hdev);
     }

-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                       mgmt_status(status), &rp, sizeof(rp));
     mgmt_pending_remove(cmd);

@@ -5227,8 +5210,7 @@ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,

     if (pending_find(MGMT_OP_SET_LE, hdev) ||
         pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
-        pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev) ||
-        pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) {
+        pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
         status = MGMT_STATUS_BUSY;
         goto unlock;
     }
@@ -5398,8 +5380,7 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
     struct mgmt_pending_cmd *cmd = data;
     struct mgmt_cp_remove_adv_monitor *cp;

-    if (status == -ECANCELED ||
-        cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
+    if (status == -ECANCELED)
         return;

     hci_dev_lock(hdev);
@@ -5408,12 +5389,14 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,

     rp.monitor_handle = cp->monitor_handle;

-    if (!status)
+    if (!status) {
+        mgmt_adv_monitor_removed(cmd->sk, hdev, cp->monitor_handle);
         hci_update_passive_scan(hdev);
+    }

-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                       mgmt_status(status), &rp, sizeof(rp));
-    mgmt_pending_remove(cmd);
+    mgmt_pending_free(cmd);

     hci_dev_unlock(hdev);
     bt_dev_dbg(hdev, "remove monitor %d complete, status %d",
@@ -5423,10 +5406,6 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
 static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
 {
     struct mgmt_pending_cmd *cmd = data;
-
-    if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
-        return -ECANCELED;
-
     struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
     u16 handle = __le16_to_cpu(cp->monitor_handle);

@@ -5445,14 +5424,13 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
     hci_dev_lock(hdev);

     if (pending_find(MGMT_OP_SET_LE, hdev) ||
-        pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev) ||
         pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
         pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
         status = MGMT_STATUS_BUSY;
         goto unlock;
     }

-    cmd = mgmt_pending_add(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
+    cmd = mgmt_pending_new(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
     if (!cmd) {
         status = MGMT_STATUS_NO_RESOURCES;
         goto unlock;
@@ -5462,7 +5440,7 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
                              mgmt_remove_adv_monitor_complete);

     if (err) {
-        mgmt_pending_remove(cmd);
+        mgmt_pending_free(cmd);

         if (err == -ENOMEM)
             status = MGMT_STATUS_NO_RESOURCES;
@@ -5792,7 +5770,7 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
         cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
         return;

-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
                       cmd->param, 1);
     mgmt_pending_remove(cmd);

@@ -6013,7 +5991,7 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)

     bt_dev_dbg(hdev, "err %d", err);

-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
                       cmd->param, 1);
     mgmt_pending_remove(cmd);

@@ -6238,7 +6216,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
     u8 status = mgmt_status(err);

     if (status) {
-        mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev,
+        mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
                              cmd_status_rsp, &status);
         return;
     }
@@ -6248,7 +6226,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
     else
         hci_dev_clear_flag(hdev, HCI_ADVERTISING);

-    mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, settings_rsp,
+    mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
                          &match);

     new_settings(hdev, match.sk);
@@ -6592,7 +6570,7 @@ static void set_bredr_complete(struct hci_dev *hdev, void *data, int err)
          */
         hci_dev_clear_flag(hdev, HCI_BREDR_ENABLED);

-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
     } else {
         send_settings_rsp(cmd->sk, MGMT_OP_SET_BREDR, hdev);
         new_settings(hdev, cmd->sk);
@@ -6729,7 +6707,7 @@ static void set_secure_conn_complete(struct hci_dev *hdev, void *data, int err)
     if (err) {
         u8 mgmt_err = mgmt_status(err);

-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
         goto done;
     }

@@ -7176,7 +7154,7 @@ static void get_conn_info_complete(struct hci_dev *hdev, void *data, int err)
         rp.max_tx_power = HCI_TX_POWER_INVALID;
     }

-    mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_GET_CONN_INFO, status,
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_GET_CONN_INFO, status,
                       &rp, sizeof(rp));

     mgmt_pending_free(cmd);
@@ -7336,7 +7314,7 @@ static void get_clock_info_complete(struct hci_dev *hdev, void *data, int err)
     }

 complete:
-    mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, &rp,
+    mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, &rp,
                       sizeof(rp));

     mgmt_pending_free(cmd);
@@ -8586,10 +8564,10 @@ static void add_advertising_complete(struct hci_dev *hdev, void *data, int err)
     rp.instance = cp->instance;

     if (err)
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                         mgmt_status(err));
     else
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(err), &rp, sizeof(rp));

     add_adv_complete(hdev, cmd->sk, cp->instance, err);
@@ -8777,10 +8755,10 @@ static void add_ext_adv_params_complete(struct hci_dev *hdev, void *data,

         hci_remove_adv_instance(hdev, cp->instance);

-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                         mgmt_status(err));
     } else {
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(err), &rp, sizeof(rp));
     }

@@ -8927,10 +8905,10 @@ static void add_ext_adv_data_complete(struct hci_dev *hdev, void *data, int err)
     rp.instance = cp->instance;

     if (err)
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                         mgmt_status(err));
     else
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(err), &rp, sizeof(rp));

     mgmt_pending_free(cmd);
@@ -9089,10 +9067,10 @@ static void remove_advertising_complete(struct hci_dev *hdev, void *data,
     rp.instance = cp->instance;

     if (err)
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                         mgmt_status(err));
     else
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           MGMT_STATUS_SUCCESS, &rp, sizeof(rp));

     mgmt_pending_free(cmd);
@@ -9364,7 +9342,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
     if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
         return;

-    mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+    mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);

     if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
         mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
@@ -9402,7 +9380,8 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
         hci_update_passive_scan(hdev);
     }

-    mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+    mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
+                         &match);

     new_settings(hdev, match.sk);

@@ -9417,7 +9396,8 @@ void __mgmt_power_off(struct hci_dev *hdev)
     struct cmd_lookup match = { NULL, hdev };
     u8 zero_cod[] = { 0, 0, 0 };

-    mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+    mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
+                         &match);

     /* If the power off is because of hdev unregistration let
      * use the appropriate INVALID_INDEX status. Otherwise use
@@ -9431,7 +9411,7 @@ void __mgmt_power_off(struct hci_dev *hdev)
     else
         match.mgmt_status = MGMT_STATUS_NOT_POWERED;

-    mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+    mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);

     if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
         mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
@@ -9672,7 +9652,6 @@ static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
     device_unpaired(hdev, &cp->addr.bdaddr, cp->addr.type, cmd->sk);

     cmd->cmd_complete(cmd, 0);
-    mgmt_pending_remove(cmd);
 }

 bool mgmt_powering_down(struct hci_dev *hdev)
@@ -9728,8 +9707,8 @@ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
     struct mgmt_cp_disconnect *cp;
     struct mgmt_pending_cmd *cmd;

-    mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
-                         hdev);
+    mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, true,
+                         unpair_device_rsp, hdev);

     cmd = pending_find(MGMT_OP_DISCONNECT, hdev);
     if (!cmd)
@@ -9922,7 +9901,7 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)

     if (status) {
         u8 mgmt_err = mgmt_status(status);
-        mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev,
+        mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
                              cmd_status_rsp, &mgmt_err);
         return;
     }
@@ -9932,8 +9911,8 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
     else
         changed = hci_dev_test_and_clear_flag(hdev, HCI_LINK_SECURITY);

-    mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, settings_rsp,
-                         &match);
+    mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
+                         settings_rsp, &match);

     if (changed)
         new_settings(hdev, match.sk);
@@ -9957,9 +9936,12 @@ void mgmt_set_class_of_dev_complete(struct hci_dev *hdev, u8 *dev_class,
 {
     struct cmd_lookup match = { NULL, hdev, mgmt_status(status) };

-    mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, sk_lookup, &match);
-    mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, sk_lookup, &match);
-    mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, sk_lookup, &match);
+    mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, false, sk_lookup,
+                         &match);
+    mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, false, sk_lookup,
+                         &match);
+    mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, false, sk_lookup,
+                         &match);

     if (!status) {
         mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, dev_class,
@@ -217,30 +217,47 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,
 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
                                            struct hci_dev *hdev)
 {
-    struct mgmt_pending_cmd *cmd;
+    struct mgmt_pending_cmd *cmd, *tmp;

-    list_for_each_entry(cmd, &hdev->mgmt_pending, list) {
+    mutex_lock(&hdev->mgmt_pending_lock);
+
+    list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
         if (hci_sock_get_channel(cmd->sk) != channel)
             continue;
-        if (cmd->opcode == opcode)
+
+        if (cmd->opcode == opcode) {
+            mutex_unlock(&hdev->mgmt_pending_lock);
             return cmd;
+        }
     }

+    mutex_unlock(&hdev->mgmt_pending_lock);
+
     return NULL;
 }

-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
+void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
                           void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
                           void *data)
 {
     struct mgmt_pending_cmd *cmd, *tmp;

+    mutex_lock(&hdev->mgmt_pending_lock);
+
     list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
         if (opcode > 0 && cmd->opcode != opcode)
             continue;

+        if (remove)
+            list_del(&cmd->list);
+
         cb(cmd, data);
+
+        if (remove)
+            mgmt_pending_free(cmd);
     }
+
+    mutex_unlock(&hdev->mgmt_pending_lock);
 }

 struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
@@ -254,7 +271,7 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
         return NULL;

     cmd->opcode = opcode;
-    cmd->index = hdev->id;
+    cmd->hdev = hdev;

     cmd->param = kmemdup(data, len, GFP_KERNEL);
     if (!cmd->param) {
@@ -280,7 +297,9 @@ struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
     if (!cmd)
         return NULL;

+    mutex_lock(&hdev->mgmt_pending_lock);
     list_add_tail(&cmd->list, &hdev->mgmt_pending);
+    mutex_unlock(&hdev->mgmt_pending_lock);

     return cmd;
 }
@@ -294,7 +313,10 @@ void mgmt_pending_free(struct mgmt_pending_cmd *cmd)

 void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
 {
+    mutex_lock(&cmd->hdev->mgmt_pending_lock);
     list_del(&cmd->list);
+    mutex_unlock(&cmd->hdev->mgmt_pending_lock);
+
     mgmt_pending_free(cmd);
 }
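The mgmt_util.c rework above is the heart of the mgmt_pending hardening: every walk of the pending-command list now happens under a dedicated mutex, and the foreach helper optionally unlinks each node before invoking the callback so callbacks never race with another walker or free an entry twice. A standalone userspace sketch of the same foreach-with-remove pattern, using a singly linked list and a pthread mutex as stand-ins for the kernel types:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct pending_cmd {
    int opcode;
    struct pending_cmd *next;
};

static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pending_cmd *pending_head;

static void pending_foreach(int opcode, int remove,
                            void (*cb)(struct pending_cmd *, void *),
                            void *data)
{
    struct pending_cmd **pp = &pending_head;

    pthread_mutex_lock(&pending_lock);
    while (*pp) {
        struct pending_cmd *cmd = *pp;

        if (opcode > 0 && cmd->opcode != opcode) {
            pp = &cmd->next;
            continue;
        }
        if (remove)
            *pp = cmd->next;   /* unlink before the callback runs */
        else
            pp = &cmd->next;
        cb(cmd, data);
        if (remove)
            free(cmd);
    }
    pthread_mutex_unlock(&pending_lock);
}

static void show(struct pending_cmd *cmd, void *data)
{
    (void)data;
    printf("opcode %d handled\n", cmd->opcode);
}

int main(void)
{
    for (int i = 1; i <= 3; i++) {
        struct pending_cmd *cmd = malloc(sizeof(*cmd));

        cmd->opcode = i;
        cmd->next = pending_head;
        pending_head = cmd;
    }
    pending_foreach(0, 1, show, NULL); /* opcode 0 matches everything */
    return pending_head != NULL;
}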
@@ -33,7 +33,7 @@ struct mgmt_mesh_tx {
 struct mgmt_pending_cmd {
     struct list_head list;
     u16 opcode;
-    int index;
+    struct hci_dev *hdev;
     void *param;
     size_t param_len;
     struct sock *sk;
@@ -54,7 +54,7 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,

 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
                                            struct hci_dev *hdev);
-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
+void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
                           void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
                           void *data);
 struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
@@ -3233,6 +3233,13 @@ static const struct bpf_func_proto bpf_skb_vlan_pop_proto = {
     .arg1_type      = ARG_PTR_TO_CTX,
 };

+static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto)
+{
+    skb->protocol = htons(proto);
+    if (skb_valid_dst(skb))
+        skb_dst_drop(skb);
+}
+
 static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
 {
     /* Caller already did skb_cow() with len as headroom,
@@ -3329,7 +3336,7 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
         }
     }

-    skb->protocol = htons(ETH_P_IPV6);
+    bpf_skb_change_protocol(skb, ETH_P_IPV6);
     skb_clear_hash(skb);

     return 0;
@@ -3359,7 +3366,7 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
         }
     }

-    skb->protocol = htons(ETH_P_IP);
+    bpf_skb_change_protocol(skb, ETH_P_IP);
     skb_clear_hash(skb);

     return 0;
@@ -3550,10 +3557,10 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
         /* Match skb->protocol to new outer l3 protocol */
         if (skb->protocol == htons(ETH_P_IP) &&
             flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV6)
-            skb->protocol = htons(ETH_P_IPV6);
+            bpf_skb_change_protocol(skb, ETH_P_IPV6);
         else if (skb->protocol == htons(ETH_P_IPV6) &&
                  flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV4)
-            skb->protocol = htons(ETH_P_IP);
+            bpf_skb_change_protocol(skb, ETH_P_IP);
     }

     if (skb_is_gso(skb)) {
@@ -3606,10 +3613,10 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
     /* Match skb->protocol to new outer l3 protocol */
     if (skb->protocol == htons(ETH_P_IP) &&
         flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV6)
-        skb->protocol = htons(ETH_P_IPV6);
+        bpf_skb_change_protocol(skb, ETH_P_IPV6);
     else if (skb->protocol == htons(ETH_P_IPV6) &&
              flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV4)
-        bpf_skb_change_protocol(skb, ETH_P_IP);
+        bpf_skb_change_protocol(skb, ETH_P_IP);

     if (skb_is_gso(skb)) {
         struct skb_shared_info *shinfo = skb_shinfo(skb);
@@ -1083,7 +1083,8 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
             ethtool_get_flow_spec_ring(info.fs.ring_cookie))
             return -EINVAL;

-        if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context))
+        if (info.rss_context &&
+            !xa_load(&dev->ethtool->rss_ctx, info.rss_context))
             return -EINVAL;
     }
net/ipv6/route.c (110 lines changed)

@@ -3737,6 +3737,53 @@ void fib6_nh_release_dsts(struct fib6_nh *fib6_nh)
     }
 }

+static int fib6_config_validate(struct fib6_config *cfg,
+                                struct netlink_ext_ack *extack)
+{
+    /* RTF_PCPU is an internal flag; can not be set by userspace */
+    if (cfg->fc_flags & RTF_PCPU) {
+        NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
+        goto errout;
+    }
+
+    /* RTF_CACHE is an internal flag; can not be set by userspace */
+    if (cfg->fc_flags & RTF_CACHE) {
+        NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
+        goto errout;
+    }
+
+    if (cfg->fc_type > RTN_MAX) {
+        NL_SET_ERR_MSG(extack, "Invalid route type");
+        goto errout;
+    }
+
+    if (cfg->fc_dst_len > 128) {
+        NL_SET_ERR_MSG(extack, "Invalid prefix length");
+        goto errout;
+    }
+
+#ifdef CONFIG_IPV6_SUBTREES
+    if (cfg->fc_src_len > 128) {
+        NL_SET_ERR_MSG(extack, "Invalid source address length");
+        goto errout;
+    }
+
+    if (cfg->fc_nh_id && cfg->fc_src_len) {
+        NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
+        goto errout;
+    }
+#else
+    if (cfg->fc_src_len) {
+        NL_SET_ERR_MSG(extack,
+                       "Specifying source address requires IPV6_SUBTREES to be enabled");
+        goto errout;
+    }
+#endif
+    return 0;
+errout:
+    return -EINVAL;
+}
+
 static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
                                                gfp_t gfp_flags,
                                                struct netlink_ext_ack *extack)
@@ -3886,6 +3933,10 @@ int ip6_route_add(struct fib6_config *cfg, gfp_t gfp_flags,
     struct fib6_info *rt;
     int err;

+    err = fib6_config_validate(cfg, extack);
+    if (err)
+        return err;
+
     rt = ip6_route_info_create(cfg, gfp_flags, extack);
     if (IS_ERR(rt))
         return PTR_ERR(rt);
@@ -4479,53 +4530,6 @@ void rt6_purge_dflt_routers(struct net *net)
     rcu_read_unlock();
 }

-static int fib6_config_validate(struct fib6_config *cfg,
-                                struct netlink_ext_ack *extack)
-{
-    /* RTF_PCPU is an internal flag; can not be set by userspace */
-    if (cfg->fc_flags & RTF_PCPU) {
-        NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
-        goto errout;
-    }
-
-    /* RTF_CACHE is an internal flag; can not be set by userspace */
-    if (cfg->fc_flags & RTF_CACHE) {
-        NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
-        goto errout;
-    }
-
-    if (cfg->fc_type > RTN_MAX) {
-        NL_SET_ERR_MSG(extack, "Invalid route type");
-        goto errout;
-    }
-
-    if (cfg->fc_dst_len > 128) {
-        NL_SET_ERR_MSG(extack, "Invalid prefix length");
-        goto errout;
-    }
-
-#ifdef CONFIG_IPV6_SUBTREES
-    if (cfg->fc_src_len > 128) {
-        NL_SET_ERR_MSG(extack, "Invalid source address length");
-        goto errout;
-    }
-
-    if (cfg->fc_nh_id && cfg->fc_src_len) {
-        NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
-        goto errout;
-    }
-#else
-    if (cfg->fc_src_len) {
-        NL_SET_ERR_MSG(extack,
-                       "Specifying source address requires IPV6_SUBTREES to be enabled");
-        goto errout;
-    }
-#endif
-    return 0;
-errout:
-    return -EINVAL;
-}
-
 static void rtmsg_to_fib6_config(struct net *net,
                                  struct in6_rtmsg *rtmsg,
                                  struct fib6_config *cfg)
@@ -4563,10 +4567,6 @@ int ipv6_route_ioctl(struct net *net, unsigned int cmd, struct in6_rtmsg *rtmsg)

     switch (cmd) {
     case SIOCADDRT:
-        err = fib6_config_validate(&cfg, NULL);
-        if (err)
-            break;
-
         /* Only do the default setting of fc_metric in route adding */
         if (cfg.fc_metric == 0)
             cfg.fc_metric = IP6_RT_PRIO_USER;
@@ -5402,6 +5402,10 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
     int nhn = 0;
     int err;

+    err = fib6_config_validate(cfg, extack);
+    if (err)
+        return err;
+
     replace = (cfg->fc_nlinfo.nlh &&
                (cfg->fc_nlinfo.nlh->nlmsg_flags & NLM_F_REPLACE));

@@ -5636,10 +5640,6 @@ static int inet6_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh,
     if (err < 0)
         return err;

-    err = fib6_config_validate(&cfg, extack);
-    if (err)
-        return err;
-
     if (cfg.fc_metric == 0)
         cfg.fc_metric = IP6_RT_PRIO_USER;

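The route.c change is structural rather than a new check: fib6_config_validate() moves from the individual netlink/ioctl entry points into ip6_route_add() (and the multipath path), so every caller that can create a route passes through the same validation and none can be added later without it. A standalone sketch of the validate-at-the-choke-point shape, trimmed to two of the checks and with simplified types:

#include <stdio.h>

struct fib6_config {
    int fc_type;
    int fc_dst_len;
};

#define RTN_MAX 11

static int fib6_config_validate(const struct fib6_config *cfg)
{
    if (cfg->fc_type > RTN_MAX)
        return -1; /* "Invalid route type" */
    if (cfg->fc_dst_len > 128)
        return -1; /* "Invalid prefix length" */
    return 0;
}

/* Single choke point: every route-creation caller inherits validation. */
static int route_add(const struct fib6_config *cfg)
{
    if (fib6_config_validate(cfg))
        return -1;
    printf("route installed (type %d, /%d)\n",
           cfg->fc_type, cfg->fc_dst_len);
    return 0;
}

int main(void)
{
    struct fib6_config good = { 1, 64 }, bad = { 1, 200 };

    route_add(&good);
    printf("bad prefix: %s\n", route_add(&bad) ? "rejected" : "accepted");
    return 0;
}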
@@ -661,7 +661,7 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
     for (i = q->nbands; i < oldbands; i++) {
         if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
             list_del_init(&q->classes[i].alist);
-        qdisc_tree_flush_backlog(q->classes[i].qdisc);
+        qdisc_purge_queue(q->classes[i].qdisc);
     }
     WRITE_ONCE(q->nstrict, nstrict);
     memcpy(q->prio2band, priomap, sizeof(priomap));
@@ -211,7 +211,7 @@ static int prio_tune(struct Qdisc *sch, struct nlattr *opt,
     memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);

     for (i = q->bands; i < oldbands; i++)
-        qdisc_tree_flush_backlog(q->queues[i]);
+        qdisc_purge_queue(q->queues[i]);

     for (i = oldbands; i < q->bands; i++) {
         q->queues[i] = queues[i];
@@ -285,7 +285,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
     q->userbits = userbits;
     q->limit = ctl->limit;
     if (child) {
-        qdisc_tree_flush_backlog(q->qdisc);
+        qdisc_purge_queue(q->qdisc);
         old_child = q->qdisc;
         q->qdisc = child;
     }
@@ -310,7 +310,10 @@ drop:
 		/* It is difficult to believe, but ALL THE SLOTS HAVE LENGTH 1. */
 		x = q->tail->next;
 		slot = &q->slots[x];
-		q->tail->next = slot->next;
+		if (slot->next == x)
+			q->tail = NULL; /* no more active slots */
+		else
+			q->tail->next = slot->next;
 		q->ht[slot->hash] = SFQ_EMPTY_SLOT;
 		goto drop;
 	}
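The SFQ fix above handles a corner case in a circular, index-threaded singly linked list: q->tail points at the last active slot and tail->next at the head, so when the head being unlinked is the only element, the tail must be cleared rather than relinked, or it would keep pointing at the freed slot. A stand-alone sketch of the same unlink logic, with all names hypothetical:

/* Hypothetical model of unlinking the head of a circular slot list. */
#include <stdio.h>

#define EMPTY (-1)

struct slot { int next; };

static struct slot slots[4];
static int tail = EMPTY;	/* index of the last active slot */

static void drop_head(void)
{
	int x = slots[tail].next;	/* head of the circular list */

	if (slots[x].next == x)		/* head was the only slot... */
		tail = EMPTY;		/* ...so the list is now empty */
	else
		slots[tail].next = slots[x].next;	/* unlink the head */
}

int main(void)
{
	/* one active slot, linked to itself */
	slots[2].next = 2;
	tail = 2;

	drop_head();
	printf("tail after drop: %s\n", tail == EMPTY ? "empty" : "set");
	return 0;
}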
@@ -653,6 +656,14 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
 		NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
 		return -EINVAL;
 	}
+
+	if (ctl->perturb_period < 0 ||
+	    ctl->perturb_period > INT_MAX / HZ) {
+		NL_SET_ERR_MSG_MOD(extack, "invalid perturb period");
+		return -EINVAL;
+	}
+	perturb_period = ctl->perturb_period * HZ;
+
 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
 					ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
 		return -EINVAL;
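The new bound is exactly the largest value whose conversion to jiffies still fits in an int: anything above INT_MAX / HZ overflows in perturb_period * HZ. A quick userspace illustration of the guard, assuming HZ is 1000 (the real value is a kernel config choice):

/* Why the bound is INT_MAX / HZ: reject before multiplying. */
#include <limits.h>
#include <stdio.h>

#define HZ 1000	/* assumption for this sketch */

static int check_period(int period)
{
	if (period < 0 || period > INT_MAX / HZ)
		return -1;	/* would overflow, reject */
	return period * HZ;	/* now provably cannot overflow */
}

int main(void)
{
	printf("10s  -> %d jiffies\n", check_period(10));
	printf("huge -> %d (rejected)\n", check_period(INT_MAX / HZ + 1));
	return 0;
}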
@@ -669,14 +680,12 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
 	headdrop = q->headdrop;
 	maxdepth = q->maxdepth;
 	maxflows = q->maxflows;
-	perturb_period = q->perturb_period;
 	quantum = q->quantum;
 	flags = q->flags;
 
 	/* update and validate configuration */
 	if (ctl->quantum)
 		quantum = ctl->quantum;
-	perturb_period = ctl->perturb_period * HZ;
 	if (ctl->flows)
 		maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
 	if (ctl->divisor) {
@@ -452,7 +452,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
 
 	sch_tree_lock(sch);
 	if (child) {
-		qdisc_tree_flush_backlog(q->qdisc);
+		qdisc_purge_queue(q->qdisc);
 		old = q->qdisc;
 		q->qdisc = child;
 	}
@@ -1971,7 +1971,8 @@ static void unix_maybe_add_creds(struct sk_buff *skb, const struct sock *sk,
 	if (UNIXCB(skb).pid)
 		return;
 
-	if (unix_may_passcred(sk) || unix_may_passcred(other)) {
+	if (unix_may_passcred(sk) || unix_may_passcred(other) ||
+	    !other->sk_socket) {
 		UNIXCB(skb).pid = get_pid(task_tgid(current));
 		current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid);
 	}
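The widened condition also attaches credentials when the peer has no struct socket yet, i.e. an embryo created by connect() that has not been accept()ed, so a listener that enables credential passing only around accept() still receives them. For reference, a minimal userspace demonstration of the SCM_CREDENTIALS machinery itself, using a plain socketpair rather than an embryo, with error handling trimmed:

/* Minimal SO_PASSCRED / SCM_CREDENTIALS demo on AF_UNIX. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2], on = 1;

	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

	/* receiver opts in: kernel attaches the sender's credentials */
	setsockopt(sv[1], SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));

	write(sv[0], "x", 1);

	char data;
	char cbuf[CMSG_SPACE(sizeof(struct ucred))];
	struct iovec iov = { .iov_base = &data, .iov_len = 1 };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};

	recvmsg(sv[1], &msg, 0);

	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_type == SCM_CREDENTIALS) {
		struct ucred uc;

		memcpy(&uc, CMSG_DATA(cmsg), sizeof(uc));
		printf("pid=%d uid=%u gid=%u\n",
		       (int)uc.pid, (unsigned)uc.uid, (unsigned)uc.gid);
	}
	return 0;
}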
@@ -1583,7 +1583,7 @@ nl80211_parse_connkeys(struct cfg80211_registered_device *rdev,
 
 	return result;
  error:
-	kfree(result);
+	kfree_sensitive(result);
 	return ERR_PTR(err);
 }
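kfree_sensitive() zeroes the allocation before returning it to the allocator, which matters here because the buffer holds connection key material that should not linger in freed memory. A userspace analogue of the same idea, using explicit_bzero() so the compiler cannot elide the wipe (assumes glibc 2.25 or later; the struct layout is hypothetical):

/* Userspace analogue of the kfree() -> kfree_sensitive() change. */
#include <stdlib.h>
#include <string.h>

struct connkeys {
	unsigned char key[32];
	size_t len;
};

static void free_sensitive(struct connkeys *keys)
{
	if (!keys)
		return;
	explicit_bzero(keys, sizeof(*keys));	/* wipe; not optimized out */
	free(keys);
}

int main(void)
{
	struct connkeys *keys = calloc(1, sizeof(*keys));

	/* ... fill keys->key with secret material ... */
	free_sensitive(keys);	/* instead of a bare free(keys) */
	return 0;
}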
@@ -747,6 +747,62 @@ def test_rss_ntuple_addition(cfg):
                           'noise' : (0,) })
 
 
+def test_rss_default_context_rule(cfg):
+    """
+    Allocate a port, direct this port to context 0, then create a new RSS
+    context and steer all TCP traffic to it (context 1).
+
+    Verify that:
+      * Traffic to the specific port continues to use queues of the main
+        context (0/1).
+      * Traffic to any other TCP port is redirected to the new context
+        (queues 2/3).
+    """
+
+    require_ntuple(cfg)
+
+    queue_cnt = len(_get_rx_cnts(cfg))
+    if queue_cnt < 4:
+        try:
+            ksft_pr(f"Increasing queue count {queue_cnt} -> 4")
+            ethtool(f"-L {cfg.ifname} combined 4")
+            defer(ethtool, f"-L {cfg.ifname} combined {queue_cnt}")
+        except Exception as exc:
+            raise KsftSkipEx("Not enough queues for the test") from exc
+
+    # Use queues 0 and 1 for the main context
+    ethtool(f"-X {cfg.ifname} equal 2")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    # Create a new RSS context that uses queues 2 and 3
+    ctx_id = ethtool_create(cfg, "-X", "context new start 2 equal 2")
+    defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")
+
+    # Generic low-priority rule: redirect all TCP traffic to the new context.
+    # Give it an explicit higher location number (lower priority).
+    flow_generic = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} context {ctx_id} loc 1"
+    ethtool(f"-N {cfg.ifname} {flow_generic}")
+    defer(ethtool, f"-N {cfg.ifname} delete 1")
+
+    # Specific high-priority rule for a random port that should stay on context 0.
+    # Assign loc 0 so it is evaluated before the generic rule.
+    port_main = rand_port()
+    flow_main = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_main} context 0 loc 0"
+    ethtool(f"-N {cfg.ifname} {flow_main}")
+    defer(ethtool, f"-N {cfg.ifname} delete 0")
+
+    _ntuple_rule_check(cfg, 1, ctx_id)
+
+    # Verify that traffic matching the specific rule still goes to queues 0/1
+    _send_traffic_check(cfg, port_main, "context 0",
+                        { 'target': (0, 1),
+                          'empty' : (2, 3) })
+
+    # And that traffic for any other port is steered to the new context
+    port_other = rand_port()
+    _send_traffic_check(cfg, port_other, f"context {ctx_id}",
+                        { 'target': (2, 3),
+                          'noise' : (0, 1) })
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__, nsim_test=False) as cfg:
         cfg.context_cnt = None
@@ -760,7 +816,8 @@ def main() -> None:
                   test_rss_context_overlap, test_rss_context_overlap2,
                   test_rss_context_out_of_order, test_rss_context4_create_with_cfg,
                   test_flow_add_context_missing,
-                  test_delete_rss_context_busy, test_rss_ntuple_addition],
+                  test_delete_rss_context_busy, test_rss_ntuple_addition,
+                  test_rss_default_context_rule],
                  args=(cfg, ))
     ksft_exit()
@@ -27,6 +27,7 @@ TEST_PROGS += amt.sh
 TEST_PROGS += unicast_extensions.sh
 TEST_PROGS += udpgro_fwd.sh
 TEST_PROGS += udpgro_frglist.sh
+TEST_PROGS += nat6to4.sh
 TEST_PROGS += veth.sh
 TEST_PROGS += ioam6.sh
 TEST_PROGS += gro.sh
tools/testing/selftests/net/nat6to4.sh (new executable file, 15 lines)
@@ -0,0 +1,15 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+NS="ns-peer-$(mktemp -u XXXXXX)"
+
+ip netns add "${NS}"
+ip -netns "${NS}" link set lo up
+ip -netns "${NS}" route add default via 127.0.0.2 dev lo
+
+tc -n "${NS}" qdisc add dev lo ingress
+tc -n "${NS}" filter add dev lo ingress prio 4 protocol ip \
+	bpf object-file nat6to4.bpf.o section schedcls/egress4/snat4 direct-action
+
+ip netns exec "${NS}" \
+	bash -c 'echo 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789abc | socat - UDP4-DATAGRAM:224.1.0.1:6666,ip-multicast-loop=1'
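The new selftest feeds an IPv4 datagram through a tc BPF program that converts it with bpf_skb_change_proto(), exercising the "clear the dst when changing skb protocol" fix on loopback. The real program, nat6to4.bpf.o, does full header rewriting; the fragment below is only a hypothetical sketch of the attachment shape and the protocol-switch call, not the actual selftest source:

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch: switch an skb from IPv4 to IPv6 metadata. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("schedcls/egress4/snat4")
int snat4(struct __sk_buff *skb)
{
	/* Convert the skb to IPv6; after this call the packet still has
	 * no valid IPv6 header, which the real program goes on to write. */
	if (bpf_skb_change_proto(skb, bpf_htons(ETH_P_IPV6), 0))
		return TC_ACT_SHOT;
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";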