I've created a simple test to help me work through bandwidth issues. I have a TCP server running on a Windows box. The CC3000 connects to the server and requests a file, and the server sends a 250 KB file. I then measure how fast the transfer goes.
When I look at the Wireshark capture, there is always a 5 ms delay between the Windows box sending a data segment and the CC3000's ACK. Is that normal? From what I've read, I believe this is a limitation of the CC3000.
Then, like clockwork, every 10th packet is dropped, and the server retransmits it after a 300 ms timeout waiting for the ACK. (This is where my real problem lies.) The pattern repeats every 10 packets for the entire duration of the transfer, and it happens every time I run the test. I even ran the test against a different computer and got exactly the same results.
Here is the wireshark capture of a transfer:
http://www.tjscreed.com/5ms_delay.pcapng
My client code is below. As you can see, I've set SOCKOPT_RECV_NONBLOCK on the socket so that recv() does not block, and I've added a counter to verify that the call really is non-blocking: nonblockcnt increments about 100 times for each recv() call that actually returns bytes.
Why does the CC3000 drop every 10th packet? Have I missed something somewhere?
// Connect socket and request the file
ulClientSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

lerr = connect(ulClientSocket, &tSocketAddr, sizeof(tSocketAddr));
if (lerr != ESUCCESS)
{
    print("Error Connecting\r\n");
    return;
}

SentLen = send(ulClientSocket, req, strlen(req), 0);

// 0 = SOCK_ON: enables non-blocking recv on the CC3000
long optvalue_block = 0;
if (setsockopt(ulClientSocket, SOL_SOCKET, SOCKOPT_RECV_NONBLOCK,
               &optvalue_block, sizeof(optvalue_block)) != 0)
{
    print("Error setting RECV_NONBLOCK\r\n");
}

// Spin on recv() until the whole file has arrived
do
{
    iBytesReceived = recv(ulClientSocket, rxbuf, RX_BUFFER_SIZE, 0);
    if (iBytesReceived > 0)
    {
        cnt += iBytesReceived;  // bytes actually received
    }
    else
    {
        nonblockcnt++;          // recv returned with no data pending
    }
} while (cnt != totalbytes);