* change the HT_RC_2_MCS to do MCS0..23
* Use it when looking up the ht20/ht40 array for bits-per-symbol
* add a clk_to_psec (picoseconds) routine, so we can get sub-microsecond
accuracy for the math
* .. and make that + clk_to_usec public, so higher layer code that is
returning clocks (eg the ANI diag routines, some upcoming locationing
experiments) can be converted to microseconds.
Whilst here, add a comment in ar5416 so I or someone else can revisit the
latency values.
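As a rough illustration of the conversion (a sketch only - the committed
routine gets the clock rate from the HAL/channel state, which isn't shown
here):

    /*
     * Sketch: convert MAC clock ticks to picoseconds.  One tick at
     * clock_mhz MHz lasts (1,000,000 / clock_mhz) picoseconds, so e.g.
     * a 44 MHz clock gives ~22727 ps per tick - resolution that the
     * microsecond routine simply throws away.
     */
    static uint64_t
    clk_to_psec(uint32_t clks, uint32_t clock_mhz)
    {
            return ((uint64_t) clks * 1000000ULL) / clock_mhz;
    }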
Uses of commas instead of semicolons can easily go undetected. The comma
can serve as a statement separator but this shouldn't be abused when
statements are meant to be standalone.
Detected with devel/coccinelle following a hint from DragonFlyBSD.
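A made-up example of the pattern in question (field names are illustrative):

    /* What gets written - the comma quietly fuses the two into a
     * single expression statement, so it still compiles: */
    sc->sc_rx_ok = 0,
    sc->sc_rx_err = 0;

    /* What was meant - two standalone statements: */
    sc->sc_rx_ok = 0;
    sc->sc_rx_err = 0;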
MFC after: 1 month
* the code already stored the length of the RX desc, which I never used.
So, use that and retire the new flag I introduced a while ago.
* Introduce a TX timestamp length field and capability.
Right now the only way to force a cold reset is:
* The HAL itself detects it's needed, or
* The sysctl, setting all resets to be cold.
Trouble is, cold resets take quite a bit longer than warm resets.
However, there are situations where a cold reset would be nice.
Specifically, after a stuck beacon, BB/MAC hang, stuck calibration results,
etc.
The vendor HAL has a separate method to set the reset reason (which is
how HAL_RESET_BBPANIC gets set) which informs the HAL during the reset path
why it occurred. This is almost but not quite the same; I may eventually
unify both approaches in the future.
This commit just extends HAL_RESET_TYPE to include both status (eg BBPANIC)
and type (eg do COLD.) None of the HAL code uses it yet though; that'll
come later.
It also is a big no-op in each HAL - I need to go teach each of the HALs
about cold/warm reset through this path.
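As a sketch only (the committed values may well differ), the combined
status/type argument could look like:

    typedef enum {
            HAL_RESET_NORMAL        = 0,    /* status: ordinary reset */
            HAL_RESET_BBPANIC       = 1,    /* status: baseband panic seen */
            HAL_RESET_FORCE_COLD    = 2,    /* type: do a cold reset */
    } HAL_RESET_TYPE;

Each per-chip reset path would then map the type onto its warm/cold
decision; as noted above, that wiring comes later.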
This was off because the net80211 aggregation code was using the same
state pointers for both fast frames and ampdu tx support which led to some
pretty unfortunate panic-y behaviour.
Now that net80211 doesn't panic, let's flip this back on.
It doesn't (yet) do the horrific sounding thing of A-MPDU aggregates
of fast frames; that'll come next. It's a pre-requisite to supporting
AMSDU + AMPDU anyway, which actually speeds things up quite considerably
(think packing lots of little ACK frames into a single AMSDU.)
Tested:
* QCA955x SoC, AP mode
* AR5416, STA mode
* AR9170, STA mode (with local fast frame patches)
rather than private in each HAL module.
Whilst here, modify ath_hal_private to always have the per-channel
noisefloor stats, rather than conditionally. This just makes
life easier in general (no strange ABI differences between different
HAL compile options.)
Add a couple of methods (clear/reset, add) rather than using
hand-rolled versions of things.
in prep for the next NF calibration pass.
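Roughly, the clear/add helpers amount to something like this (the struct
and names here are illustrative, not the actual ath_hal_private layout):

    struct chan_nf_stats {
            int32_t         nf_sum;         /* running sum of NF samples */
            uint32_t        nf_count;       /* number of samples */
    };

    static void
    chan_nf_stats_reset(struct chan_nf_stats *st)
    {
            st->nf_sum = 0;
            st->nf_count = 0;
    }

    static void
    chan_nf_stats_add(struct chan_nf_stats *st, int32_t nf)
    {
            st->nf_sum += nf;
            st->nf_count++;
    }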
Totally missing braces. Damn you C.
Submitted by: Sascha Wildner <swildner@dragonflybsd.org>
MFC after: 1 week
These variants have a few differences from the default AR9485 NIC,
namely:
* a non-default antenna switch config;
* slightly different RX gain table setup;
* an external XLNA hooked up to a GPIO pin;
* (and not yet done) RSSI threshold differences when
doing slow diversity.
To make this possible:
* Add the PCI device list from Linux ath9k, complete with vendor and
sub-vendor IDs for various things to be enabled;
* .. and until FreeBSD learns about a PCI device list like this,
write a search function inspired by the USB device enumeration code
(see the sketch after this list);
* add HAL_OPS_CONFIG to the HAL attach methods; the HAL can use this
to initialise its local driver parameters upon attach;
* copy these parameters over in the AR9300 HAL;
* don't default to override the antenna switch - only do it for
the chips that require it;
* I brought over ar9300_attenuation_apply() from ath9k which is cleaner
and easier to read for this particular NIC.
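The search function is along these lines (a sketch of the idea; the table
layout, wildcard convention and function name are assumptions):

    struct ath_pci_id {
            uint16_t vendor, device;
            uint16_t subvendor, subdevice;  /* 0xffff acts as a wildcard */
            uint32_t flags;                 /* e.g. "this is a CUS198" */
    };

    static const struct ath_pci_id *
    ath_pci_find_id(const struct ath_pci_id *tbl, int n,
        uint16_t ven, uint16_t dev, uint16_t sven, uint16_t sdev)
    {
            int i;

            for (i = 0; i < n; i++) {
                    if (tbl[i].vendor != ven || tbl[i].device != dev)
                            continue;
                    if (tbl[i].subvendor != 0xffff && tbl[i].subvendor != sven)
                            continue;
                    if (tbl[i].subdevice != 0xffff && tbl[i].subdevice != sdev)
                            continue;
                    return (&tbl[i]);
            }
            return (NULL);
    }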
This is a work in progress. I'm worried that there's some post-AR9380
NIC out there which doesn't work without the antenna override set as
I currently haven't implemented bluetooth coexistence for the AR9380
and later HAL. But I'd rather have this code in the tree and fix it
up before 11.0-RELEASE happens versus having a set of newer NICs
in laptops be effectively RX deaf.
Tested:
* AR9380 (STA)
* AR9485 CUS198 (STA)
Obtained from: Qualcomm Atheros, Linux ath9k
Some code will appear soon that actually sets the chip power state
separately from the self-generated frames power state.
* Allow the AR5416 family chips to actually have the power state changed
separately from the self-generated state change.
Tested (STA mode):
* AR5210
* AR5211
* AR5412
* AR5413
* AR5416
* AR9285
the QCA HAL.
This fires off an interrupt if the TSF from the AP / IBSS peer is
wildly out of range. I'll add some code to the ath(4) driver soon
which makes use of this.
TODO:
* verify this didn't break TDMA!
to the hardware.
The QCA HAL has a comment noting that if this isn't done, modifications
to AR_IMR_S2 before AR_IMR is flushed may produce spurious interrupts.
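Illustratively (the helper and its arguments are placeholders; only the
write/flush/write ordering is the point):

    static void
    ar5416_set_imr(struct ath_hal *ah, uint32_t imr, uint32_t imr_s2)
    {
            OS_REG_WRITE(ah, AR_IMR, imr);
            (void) OS_REG_READ(ah, AR_IMR);         /* flush AR_IMR out .. */
            OS_REG_WRITE(ah, AR_IMR_S2, imr_s2);    /* .. before writing S2 */
    }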
Obtained from: QCA
This way the state changes from sleep->awake before the registers are poked
and from awake->sleep after the registers are poked.
This way spurious warnings aren't printed by my (to be committed)
debugging code.
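The pattern, sketched (the driver-side wrapper names here are assumptions;
the bracketing is the point):

    static void
    ath_set_intr_mask(struct ath_softc *sc, uint32_t imask)
    {
            ath_power_set_power_state(sc, HAL_PM_AWAKE);    /* sleep -> awake */
            ath_hal_intrset(sc->sc_ah, imask);              /* poke registers */
            ath_power_restore_power_state(sc);              /* awake -> sleep */
    }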
Tested:
* AR5416, STA
private per-chip HAL.
This allows the ah_osdep.[ch] code to check whether the power state is
valid for doing chip programming.
It should be a no-op for normal driver work but it does require a
clean kernel/module rebuild, as the size of the HAL structures has changed.
Now, this doesn't track whether the hardware is ACTUALLY awake,
as NETWORK_SLEEP wakes the chip up for a short period when traffic
is received. This doesn't actually set the power mode to AWAKE, so
we have to be careful about how we touch things.
But it's enough to start down the path of implementing station mode
chipset power savings, as a large part of the silliness is making
sure the chip is awake during periodic calibration / ANI and
random places where transmit may be occurring. I'd rather not have a repeat
of debugging power save on ath9k, where races with calibration
and transmit path stuff took a couple years to shake out.
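The kind of check this makes possible in ah_osdep.c, as a sketch (the
field name and message are assumptions, not the to-be-committed debugging
code):

    static void
    ath_hal_power_check(struct ath_hal *ah)
    {
            /* Assumed field: the power mode recorded at setPowerMode time. */
            if (ah->ah_powerMode != HAL_PM_AWAKE &&
                ah->ah_powerMode != HAL_PM_NETWORK_SLEEP)
                    ath_hal_printf(ah,
                        "%s: register access while asleep!\n", __func__);
    }

It would be called from the register read/write wrappers before touching
the bus.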
Tested:
* AR5416, STA mode
* Call the bluetooth setup function during the reset path, so the bluetooth
settings are actually initialised.
* Call the AR9285 diversity functions during bluetooth setup, so the AR9285
diversity and antenna configuration registers are correctly programmed.
* Misc debugging info.
Tested:
* AR9285+AR3011 bluetooth combo; this code itself doesn't enable bluetooth
coexistence but it's part of what I'm currently using.
The main problem here is that fast and driver RX diversity isn't actually
configured; I need to figure out why that is. That said, this makes
the single-antenna connected AR9285 and AR2427 (AR9285 w/ no 11n) work
correctly.
PR: kern/179269
* Add ah_ratesArray[] to the ar5416 HAL state - this stores the maximum
values permissible per rate.
* Since different chip EEPROM formats store this value in a different place,
store the HT40 power detector increment value in the ar5416 HAL state.
* Modify the target power setup code to store the maximum values in the
ar5416 HAL state rather than using a local variable.
* Add ar5416RateToRateTable() - to convert a hardware rate code to the
ratesArray enum / index.
* Add ar5416GetTxRatePower() - which goes through the gymnastics required
to correctly calculate the target TX power:
+ Add the power detector increment for ht40;
+ Take the power offset into account for AR9280 and later;
+ Offset the TX power correctly when doing open-loop TX power control;
+ Enforce the per-rate maximum value allowable.
Note - setting a TPC value of 0x0 in the TX descriptor on (at least)
the AR9160 resulted in the TX power being very high indeed. This didn't
happen on the AR9220. I'm guessing it's a chip bug that was fixed at
some point. So for now, just assume the AR5416/AR5418 and AR9130 are
also suspect and clamp the minimum value here at 1.
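The final clamp, roughly (an illustrative helper, not the committed
ar5416GetTxRatePower() itself):

    static uint8_t
    ar5416_clamp_tx_power(uint8_t txpower, uint8_t rate_max)
    {
            /* Enforce the per-rate maximum from ah_ratesArray[]. */
            if (txpower > rate_max)
                    txpower = rate_max;
            /* Never program 0 - at least the AR9160 treats a TPC value
             * of 0x0 as "very high power", so clamp the floor at 1. */
            if (txpower < 1)
                    txpower = 1;
            return (txpower);
    }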
Tested:
* AR5416, AR9160, AR9220 hostap, verified using (2GHz) spectrum analyser
* Looked at target TX power in TX descriptor (using athalq) as well as TX
power on the spectrum analyser.
TODO:
* The TX descriptor code sets the target TX power to 0 for AR9285 chips.
I'm not yet sure why. Disable this for TPC and ensure that the TPC
TX power is set.
* AR9280, AR9285, AR9227, AR9287 testing!
* 5GHz testing!
Quirks:
* The per-packet TPC code is only exercised when the tpc sysctl is set
to 1. (dev.ath.X.tpc=1.) This needs to be done before you bring the
interface up.
* When TPC is enabled, setting the TX power doesn't end up with a call
through to the HAL to update the maximum TX power. So ensure that
you set the TPC sysctl before you bring the interface up and before you
configure a lower TX power, or the hardware will be clamped by the lower
TX power (at least until the next channel change.)
Thanks to Qualcomm Atheros for all the hardware, and Sam Leffler for use
of his spectrum analyser to verify the TX channel power.
to stuck beacons.
* Set the cabq readytime (ie, how long to burst for) to 50% of the total
beacon interval time
* fix the cabq adjustment calculation based on how the beacon offset is
calculated (the SWBA/DBA time offset.)
This is all still a bit magic voodoo but it does seem to have further
quietened issues with missed/stuck beacons under my local testing.
In any case, it better matches what the reference HAL implements.
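For reference, the 50% part boils down to this (a sketch; the real code
also folds in the SWBA/DBA offset mentioned above):

    static u_int
    ath_cabq_readytime(u_int intval_tu)
    {
            /* 1 TU = 1024us; let the cab queue burst for half the
             * beacon interval. */
            return ((intval_tu * 1024) / 2);
    }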
Obtained from: Qualcomm Atheros
* Remove ar5416UpdateChainmasks();
* Remove the TX chainmask override code from the ar5416 TX descriptor
setup routines;
* Write a driver method to calculate the current chainmask based on the
operating mode and update the driver state (sketched after this list);
* Call the HAL chainmask method before calling ath_hal_reset();
* Use the currently configured chainmask in the TX descriptors rather than
the hardware TX chainmasks.
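A sketch of that driver method (names are approximate, not necessarily the
committed code):

    static void
    ath_update_chainmasks(struct ath_softc *sc, struct ieee80211_channel *chan)
    {
            /* One TX chain for legacy channels, the full hardware mask
             * for 11n; RX stays at the full mask in this sketch. */
            if (IEEE80211_IS_CHAN_HT(chan))
                    sc->sc_cur_txchainmask = sc->sc_txchainmask;
            else
                    sc->sc_cur_txchainmask = 1;
            sc->sc_cur_rxchainmask = sc->sc_rxchainmask;
    }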
Tested:
* AR5416, STA/AP mode - legacy and 11n modes
Right now the only way to set the chainmask is to set the hardware
configured chainmask through capabilities. This is fine for forcing
the chainmask to be something other than what the hardware is capable
of (eg to reduce TX/RX to one connected antenna) but it does change what
the HAL hardware chainmask configuration is.
For operational mode changes, it (may?) make sense to separately control
the TX/RX chainmask.
Right now it's done as part of ar5416_reset.c - ar5416UpdateChainMasks()
calculates which TX/RX chainmasks to enable based on the operating mode.
(1 for legacy and whatever is supported for 11n operation.) But doing
this in the HAL is suboptimal - the driver needs to know the currently
configured chainmask in order to correctly enable things for each
TX descriptor. This is currently done by overriding the chainmask
config in the ar5416 TX routines but this has to disappear - the AR9300
HAL support requires the driver to dynamically set the TX chainmask based
on the TX power and TX rate in order to meet mini-PCIe slot power
requirements.
So:
* Introduce a new HAL method to set the operational chainmask variables;
* Introduce null methods for the previous generation chipsets;
* Add new driver state to record the current chainmask separate from
the hardware configured chainmask.
Part #2 of this will involve disabling ar5416UpdateChainMasks() and moving
it into the driver; as well as properly programming the TX chainmask
based on the currently configured HAL chainmask.
Tested:
* AR5416, STA mode - both legacy (11a/11bg) and 11n rates - verified
that AR_SELFGEN_MASK (the chainmask used for self-generated frames like
ACKs and RTSes) is correct, as well as that the TX descriptor contents
are correct.
an incorrectly calculated RTS duration value when transmitting aggregates.
These earlier 802.11n NICs incorrectly used the ACK duration time when
calculating what to put in the RTS of an aggregate frame. Instead it
should have used the block-ack time. The result is that other stations
may not reserve enough time and start transmitting _over_ the top of
the in-progress blockack field. Tsk.
This workaround is to populate the burst duration field with the delta
between the ACK duration the hardware is using and the required duration
for the block-ack. The result is that the RTS field should now contain
the correct duration for the subsequent block-ack.
This doesn't apply for AR9280 and later NICs.
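So the programmed value is simply the shortfall, along these lines (a
hypothetical helper; how the burst duration field actually gets written
isn't shown):

    static u_int
    ar5416_rts_burst_delta(u_int ack_duration_us, u_int ba_duration_us)
    {
            /* The hardware inserts ack_duration + burst_duration into
             * the RTS, so make the sum equal the block-ack duration. */
            return (ba_duration_us - ack_duration_us);
    }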
Obtained from: Qualcomm Atheros
Specifically - never jack the TX FIFO threshold up to the absolute
maximum; always leave enough space for two DMA transactions to
appear.
This is a paranoia from the Linux ath9k driver. It can't hurt.
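Sketched, the clamp is just (names illustrative):

    static u_int
    ath_clamp_tx_trig_level(u_int level, u_int max_level)
    {
            /* Keep headroom for two DMA transactions below the top. */
            if (level > max_level - 2)
                    level = max_level - 2;
            return (level);
    }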
Obtained from: Linux ath9k
This has reduced the number of TX delimiter and data underruns when
doing large UDP transfers (>100mbit).
This stops any HAL_INT_TXURN interrupts from occurring, which is a good
sign!
Obtained from: Qualcomm Atheros
This includes the HAL routines to setup, enable/activate/disable spectral
scan and configure the relevant registers.
This still requires driver interaction to enable spectral scan reporting.
Specifically:
* call ah_spectralConfigure() to configure and enable spectral scan;
* .. there's currently no way to disable spectral scan... that will have
to follow.
* call ah_spectralStart() to force start a spectral report;
* call ah_spectralStop() to force stop an active spectral report.
The spectral scan results appear as PHY errors (type 0x5 on the AR9280,
same as radar) but with the spectral scan bit set (0x10 in the last byte
of the frame) identifying it as a spectral report rather than a radar
FFT report.
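In driver glue terms, telling the two apart is roughly this (the constants
follow the description above; the helper itself is hypothetical):

    #define PHYERR_RADAR            0x5     /* AR9280 radar PHY error code */
    #define SPECTRAL_REPORT_BIT     0x10    /* set in the last frame byte */

    static int
    ath_is_spectral_report(uint8_t phyerr, const uint8_t *buf, int len)
    {
            if (phyerr != PHYERR_RADAR || len <= 0)
                    return (0);
            return ((buf[len - 1] & SPECTRAL_REPORT_BIT) != 0);
    }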
Caveats:
* It's likely quite difficult to run spectral _and_ radar at the same
time. Enabling spectral scan disables the radar thresholds but
leaves radar enabled. Thus, the driver (for now) needs to ensure
that only one or the other is enabled.
* .. it needs testing on HT40 mode.
Tested:
* AR9280 in STA mode, HT/20 only
TODO:
* Test on AR9285, AR9287;
* Test in both HT20 and HT40 modes;
* .. all the driver glue.
Obtained from: Qualcomm Atheros
enforcing the TXOP and TBTT limits:
* Frames which will overlap with TBTT will not TX;
* Frames which will exceed TXOP will be filtered.
This is not enabled by default; it's intended to be enabled by the
TDMA code on 802.11n capable chipsets.
After chatting with the MAC team, the TSF writes (at least on the 11n
MACs, I don't know about pre-11n MACs) are done as 64 bit writes that
can take some time. So, doing a 32 bit TSF write is definitely not
supported. Leave a comment here which explains that.
Whilst here, add a comment which outlines that after a reset or TSF
write, the TSF may take a while (up to 50us) to update.
A write or reset shouldn't be done whilst the previous one is in
flight. Also (and this isn't currently done) a read shouldn't
occur until the SLEEP32_TSF_WRITE_STAT is clear. Right now we're
not doing that, mostly because we haven't been doing lots of TSF
resets/writes until recently.
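If/when that read-side guard is added, it would look something like this
(a sketch; the exact register/bit macro names vary between HAL generations):

    static void
    ar5416_wait_tsf_write(struct ath_hal *ah)
    {
            int i;

            /* A TSF write can take up to ~50us to land; don't read or
             * re-write the TSF while the previous write is in flight. */
            for (i = 0; i < 10; i++) {
                    if ((OS_REG_READ(ah, AR_SLP32_MODE) &
                        AR_SLP32_TSF_WRITE_STATUS) == 0)
                            break;
                    OS_DELAY(10);
            }
    }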