Serial monitor shows unexpected input from Arduino Mega



I am using an Arduino Mega to control a CS1237 ADC, following a datasheet I found (via https://github.com/SiBangkotan/CS1237-ADC-cpp-library). This seems to work to some extent: when I do a Serial.println() for each bit received as well as for the resulting dataword, I get a 24-bit dataword that matches the 24 individual bits. However, when I remove the extra debugging use of Serial.println(), i.e. printing each bit as it is received, I also get a different dataword. It is then 20 bits of all 1s every time, instead of 24 bits of mixed 1s and 0s. I don't understand why extra debug output on the serial channel should change the dataword that ends up in the serial monitor?

Here is my setup and pre-setup code:

// Using pins 2 and 3 on the Arduino, since 0 and 1 are used to talk to the USB port.
int ndrdy = 2;
int clck = 3;

// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for native USB port only
  }
  // make the drdy's pin an input and clock an output:
  pinMode(ndrdy, INPUT);
}

Here is the relevant code:

void loop() {
  // Hacky way of waiting for the signal that !DRDY is ready.
  while (digitalRead(ndrdy) == LOW) {
    // do nothing until pin pulls high.
  }
  while (digitalRead(ndrdy) == HIGH) {
    // keep doing nothing until pin goes low again.
  }
  // now data is ready, we can read
  long dataword = 0;
  for (int i = 0; i < 24; i++) {
    digitalWrite(clck, HIGH);
    delayMicroseconds(1);
    digitalWrite(clck, LOW);
    int new_bit = digitalRead(ndrdy);
    dataword <<= 1;       // shift everything one place to the left
    dataword |= new_bit;  // add the new bit to the newly empty place
  }
  // There's a total of 27 bits but we don't care about the last 3.
  // Write HIGH 3 times to flush it out.
  for (int i = 0; i < 3; i++) {
    digitalWrite(clck, HIGH);
    delayMicroseconds(1);
    digitalWrite(clck, LOW);
  }
  // Send out the data to the USB serial out:
  Serial.println(dataword, BIN);
}

The serial monitor output is:

13:44:45.685 -> 11111111111111111111
13:44:45.685 -> 11111111111111111111
13:44:45.718 -> 11111111111111111111
13:44:45.751 -> 11111111111111111111
13:44:45.751 -> 11111111111111111111
13:44:45.785 -> 11111111111111111111
13:44:45.818 -> 111111111111111111111
13:44:45.852 -> 11111111111111111111
13:44:45.852 -> 11111111111111111111
13:44:45.885 -> 11111111111111111111
13:44:45.918 -> 111111111111111111111
13:44:45.918 -> 11111111111111111111
13:44:45.951 -> 11111111111111111111

and so on. However, when I add an extra Serial.println(new_bit); just before the closing brace of the for(int i = 0; i < 24; i++) loop, I get output like this in the Arduino IDE's serial monitor (shown with timestamps turned on):

14:41:19.992 -> 0
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 1
14:41:19.992 -> 0
14:41:19.992 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 0
14:41:20.025 -> 0
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.025 -> 1
14:41:20.058 -> 0
14:41:20.058 -> 1
14:41:20.058 -> 11111111011111001111101
14:41:20.091 -> 0
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 1
14:41:20.091 -> 0
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 0
14:41:20.125 -> 0
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.125 -> 1
14:41:20.158 -> 1
14:41:20.158 -> 0
14:41:20.158 -> 1
14:41:20.158 -> 11111111011111001111101

This does not happen if I Serial.println() anything other than new_bit — for example if I do Serial.println(dataword); — or if I introduce a small delay instead of the serial print. In those cases it still produces the all-1s output. I don't know what is going wrong with the serial communication, since reading data from the ADC seems to work. If I introduce a delay of 5000 µs or more, the contents of dataword change, and appear to be a function of the delay length; that is, the contents of dataword are constant for each delay length (5000 µs, 6000 µs, 10000 µs and 20000 µs are what I tried). If the delay is long enough, it goes back to all 1s.

Looking at the datasheet...

First, when the chip powers up... by default, all pins are inputs. You never set your clock pin's mode, so you have no clock. Also, the ADC can take up to 300 ms to wake up. That is part of its boot sequence; by the time you leave setup(), the chip should be ready. You can also include any setup of the ADC's registers in setup(). See figures 5 and 6 of the datasheet for the power-up sequence.

Also, if you want to try lower clock speeds, do not hold clck high for more than 100 µs.

According to §2.5 of the datasheet: "When SCLK goes from low to high and stays high for more than 100 µs, the CS1237 enters power-down mode, consuming less than 0.1 µA. When SCLK goes back low, the chip resumes normal operation."

void setup()
{
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  while (!Serial) {}

  // make the drdy's pin an input and clock an output:
  // remove pullup on ndrdy
  digitalWrite(ndrdy, LOW);
  pinMode(ndrdy, INPUT);
  digitalWrite(clck, LOW);
  pinMode(clck, OUTPUT);

  // wait for ADC to end its own boot sequence.
  while (digitalRead(ndrdy)) {}
  while (!digitalRead(ndrdy)) {}
}
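One caveat (my addition, not from the datasheet): the two busy-wait loops above hang forever if the ADC never responds, e.g. if it is unpowered or miswired. A timeout guard is cheap insurance. Here is a host-testable sketch of the idea in plain C; the fake clock and all names are mine, and on the Arduino the clock role would be played by millis():

```c
#include <stdbool.h>

/* Fake millisecond clock for host-side testing; each call advances
   time by one "millisecond".  On the Arduino, use millis() instead. */
static unsigned long fakeTime = 0;
static unsigned long nowMs(void) { return fakeTime++; }

static bool neverReady(void)  { return false; }
static bool alwaysReady(void) { return true; }

/* Busy-wait until ready() returns true, or give up after timeoutMs
   milliseconds on the supplied clock.  Returns true if the condition
   was met, false on timeout. */
static bool waitFor(bool (*ready)(void),
                    unsigned long (*clockMs)(void),
                    unsigned long timeoutMs)
{
    unsigned long start = clockMs();
    while (!ready()) {
        if (clockMs() - start >= timeoutMs)
            return false;   /* timed out instead of hanging forever */
    }
    return true;
}
```

In setup() you would then check the return value and, say, blink the LED or print an error instead of silently looping.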

Figure 7 of the datasheet says:

Wait for /DRDY to go low, then wait for duration t4 (which is 0, so not waiting is fine), then loop for each bit:

  • set the clock high
  • wait at least duration t6 (455 ns)
  • read the input bit
  • set the clock low
  • the clock must stay low for at least duration t5 (455 ns) before the next clock pulse

You could read the data bit while the clock is low, but note that in figure 8 of the datasheet, bit 27 becomes invalid as soon as the clock goes low. Empirically, this hints that you should read while the clock is high. Some datasheets are hard to read, and some are even wrong. This is my understanding of this one, but I may be wrong, and you may also want to try reading while the clock is low.
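Independently of when you sample the pin, you can convince yourself the shift-and-OR assembly order is right without any hardware, by feeding canned bits through the same loop on the host (plain C mock; the names are mine):

```c
/* Assemble a 24-bit word from bits supplied MSB-first, exactly the
   way the sketch's read loop does: shift left, then OR in the bit. */
static long assemble24(const int bits[24])
{
    long word = 0;
    for (int i = 0; i < 24; i++) {
        word <<= 1;        /* make room for the next bit */
        word |= bits[i];   /* the bit sampled from the data pin */
    }
    return word;
}
```

For example, 24 ones assemble to 0xFFFFFF, and a single 1 in the last position assembles to 1, confirming the first bit clocked out lands in bit 23.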

Your input routine then becomes:

// reads a 24 bit value from ADC, returns -1 if no data to read
// note that this function does not wait, so your other processing
// can still be responsive.
long readADC()
{
  // check if data is ready.
  if (digitalRead(ndrdy))
    return -1;

  long result = 0;
  // read 24 bits.
  for (int i = 0; i < 24; i++)
  {
    // get ADC to output a bit.
    digitalWrite(clck, HIGH);
    delayMicroseconds(1);
    // read it
    int new_bit = digitalRead(ndrdy);
    digitalWrite(clck, LOW);
    delayMicroseconds(1);  // this delay could be shorter, because the
                           // operations below already take some time...
                           // You may want to time it using a scope, at
                           // least for the fun of it.  On a slow 8-bit
                           // ATmega it may not be needed: there are more
                           // than 16 cycles of processing below, plus 2
                           // cycles for jumping back to the top of the
                           // loop.  It IS needed for sure at clock
                           // speeds above 16 MHz.
    result <<= 1;
    result |= new_bit;
  }
  // emit 3 more clock cycles.
  for (int i = 0; i < 3; i++)
  {
    digitalWrite(clck, HIGH);
    delayMicroseconds(1);
    digitalWrite(clck, LOW);
    delayMicroseconds(1);
  }
  // note that the 27th clock cycle has set /DRDY high.
  // There is never any need to wait on /DRDY going high.
  return result;
}
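One detail the routine above leaves to the caller: the CS1237's 24-bit word is two's complement, so turning the raw word into a signed reading requires sign-extending bit 23. A minimal sketch in plain C (the helper name is mine); apply it only after checking the -1 "no data" sentinel, since a genuine negative reading can also sign-extend to -1:

```c
#include <stdint.h>

/* Interpret the raw 24-bit ADC word (0..0xFFFFFF) as a signed value
   by copying bit 23 into the upper byte of a 32-bit integer. */
static int32_t signExtend24(uint32_t raw)
{
    raw &= 0xFFFFFFul;           /* keep only the 24 data bits */
    if (raw & 0x800000ul)        /* sign bit set? */
        raw |= 0xFF000000ul;     /* extend it into bits 24..31 */
    return (int32_t)raw;
}
```

So 0x7FFFFF maps to +8388607 (full scale positive) and 0x800000 to -8388608 (full scale negative).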

void loop()
{
  // ...
  long adcValue = readADC();
  if (adcValue >= 0)
  {
    // process ADC input
    Serial.print("ADC reading: ");
    Serial.print(adcValue);
    Serial.print(" (");
    Serial.print(adcValue, BIN);
    Serial.println(")");
  }
  // ...
}

Once you have this running smoothly, you can try making reads faster by implementing your own 455 ns delay using no-ops:

#define NOOP() __asm__("nop\n\t")  // 1 operation cycle delay; on an 8-bit ATmega,
                                   // 1 op cycle == 1 clock cycle.

The actual delay will depend on your clock speed. Usually these are implemented using macros.

For example, as a multi-line macro. Note the backslashes at the ends of the lines: each must be the very last character on its line, and there must be no empty lines inside the macro.

// 500 ns delay @ 16 MHz clock, on an 8-bit ATmega.
#define NOOP() __asm__("nop\n\t")
#define DELAY_500ns()   NOOP(); NOOP(); NOOP(); NOOP(); \
                        NOOP(); NOOP(); NOOP(); NOOP();
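How many NOOP()s a given delay needs follows directly from the clock: one NOP is one cycle on an 8-bit ATmega, so a delay of ns nanoseconds at f MHz needs ceil(ns * f / 1000) cycles. A small helper to do that arithmetic (plain C; the name is mine):

```c
/* Cycles (= NOPs on an 8-bit ATmega) needed to delay 'ns' nanoseconds
   at 'clockMHz' MHz, rounded up so the delay is never too short. */
static unsigned nopsFor(unsigned ns, unsigned clockMHz)
{
    return (ns * clockMHz + 999u) / 1000u;
}
```

At 16 MHz, nopsFor(500, 16) gives 8, matching the eight NOOP()s in the macro above; the datasheet's 455 ns minimum also works out to 8 cycles at 16 MHz, or 4 at 8 MHz.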
