Channel: Forums - Recent Threads

CC3200: Cortex-M4 to NWP interface communication


I am developing Wi-Fi projects using the CC3200 LaunchPad board. Can someone explain what exactly happens when the CC3200 powers up (assuming a small Wi-Fi application)?

Where do the Wi-Fi and peripheral drivers (resident in ROM) execute?

Where does the user application run? (From external flash or internal RAM?)

I would also like some details on the interaction between the Cortex-M4 and the NWP (network processor), to understand what latencies are involved when a data transfer is initiated with a remote peer on the network.

Thanks


Are there any GPS examples on the EVM6614?


Hi

Does TI provide any GPS examples for the EVM6614?

My customer says that GPS_enable is high and the 1PPS signal is high all the time. Is there anything else my customer needs to do to activate the on-board GPS module and make sure 1PPS is generated?

There is also no serial data coming out of the module (GPS_TX pin of GPS1) when the antenna is connected.

 

Regards,

YC

TM4C129ENCPDT Availability


Greetings,

I wonder if someone at TI could let me know when the TM4C129ENCPDT will be available through TI's distributor network. The product page states the part is active, but only one distributor in Europe (Avnet Silica) has it in stock. Will any samples of this part be available in the short term?

Best Regards,

Haroldo Calvo

TPVison-DIT4096 Review

Getting Random values in ADC


When I program the ADC on the CC3200, I get junk values. Random values are displayed when I view them in HyperTerminal. I am facing the same issue even with the example code given in the SDK. How should I proceed?

I assigned pin 58 as the ADC pin.

Also, unlike the MSP430, the CC3200 does not have an ADCMEM register (as far as I know).
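For context, the CC3200's ADC delivers samples through a per-channel FIFO rather than an ADCMEM-style register, and each FIFO word carries more than just the conversion result, which can look like junk if the whole word is printed. A minimal sketch, assuming the CC3200 SDK driverlib API (PinTypeADC, ADCFIFORead) and the pin-58-to-channel-1 mapping; untested here:

```c
/* Sketch only (CC3200 SDK driverlib): reading raw samples from pin 58. */
#include "hw_types.h"
#include "hw_memmap.h"
#include "pin.h"
#include "adc.h"

unsigned long ReadAdcPin58(void)
{
    unsigned long ulSample;

    PinTypeADC(PIN_58, PIN_MODE_255);           /* route pin 58 to the ADC */
    ADCChannelEnable(ADC_BASE, ADC_CH_1);       /* pin 58 maps to channel 1 */
    ADCEnable(ADC_BASE);

    while (!ADCFIFOLvlGet(ADC_BASE, ADC_CH_1))  /* wait for a sample */
        ;
    ulSample = ADCFIFORead(ADC_BASE, ADC_CH_1);

    /* The 12-bit conversion result sits in bits [13:2] of the FIFO word;
     * printing the unmasked word is one common source of "random" values. */
    return (ulSample >> 2) & 0x0FFF;
}
```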

AM3505 SRAM interface


We need to interface SRAM to the AM3505 processor. Most SRAMs have non-muxed address and data lines (i.e. separate address and data buses).

But the number of address lines available on the AM3505 is limited and can support a maximum of 2KB.

We are planning to use latches (e.g. SN74ALVCH16374) to support 256KB of SRAM (as given in the example interface with NOR flash here).

Can anyone please confirm whether this is feasible?

Are any reference designs available?

We tried using PSRAM, but its idle-mode power consumption is high and rapidly discharges the coin-cell batteries (used to make this RAM non-volatile).

Are there any alternative NVRAM options in the range of 256KB to 4MB (within a $5 price tag)?

AJSM TMS570LS0332 JTAG unlock tool for XDS200 from Spectrum Digital?


Hi all,

I'm working with the TMS570LS0332 and have to use the AJSM (Advanced JTAG Security Module). I was not able to find any detailed information on this topic, only some old threads in this forum about how to lock the device. I still do not know which software tool is used to unlock it.

I want to unlock the device with an XDS200 from Spectrum Digital.

Can somebody explain how it works and where I can get the unlock tool?

I'm using Code Composer Studio 6.0.1.

Thanks,

Frank

66AK2H12: ARM CorePac development questions


Hi experts,

Hardware platform: 66AK2H12

Software platform: CCS 5.5

I am currently developing on the 66AK2H12, and I developed on the C6678 for several years. The difference between the 66AK2H12 and the C6678 is that the 66AK2H12 adds an ARM Cortex-A15 that also needs development, and A15 development is difficult for us, so I have some questions to consult you:

1. Must ARM A15 development be done on a Linux host? From the wiki, I see this sentence:

  • Toolchain: Must be installed on Linux Host. Ubuntu 12.04 recommended.

2. Must the ARM A15 run a Linux system?

3. If the ARM A15 runs Linux and the DSP cores run SYS/BIOS, how do they communicate? Can you give me a suggestion?

Best Regards

   Gavin


I cannot select [Use Alternate ADC "Trigger Option-B"] in HALCoGen.


Hi,

My HALCoGen version is 04.01.00.

In the case of the TMS570LS1224PGE, HALCoGen does not let me select <Use Alternate ADC "Trigger Option-B">.

Can the PGE device not use this trigger?

Regards,

FSSer

OOB getting stuck in vApplicationMallocFailedHook when configuring MQTT


I'm working on the Benjamin Cabe MQTT demo and have run into a few issues. I've edited the OOB project accordingly. It builds; however, there is no traffic on the server. When I debug, I notice that the program gets stuck in vApplicationMallocFailedHook. I'm a bit new to RTOSes, so I'm stuck at this point. Any help would be great!

Also, I've changed SSID_NAME, SECURITY_TYPE, and SECURITY_KEY in common.h to be able to access my router.

Edit: the MQTT demo can be found here: http://blog.benjamin-cabe.com/2014/08/26/mqtt-on-the-ti-cc3200-launchpad-thanks-to-paho-embedded-client
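For context, vApplicationMallocFailedHook is the FreeRTOS callback invoked when pvPortMalloc() cannot satisfy a request, i.e. the RTOS heap is exhausted; enlarging the heap in FreeRTOSConfig.h is the usual first step. A sketch only (the 48 KB figure is an assumption, not a value verified for this project):

```c
/* FreeRTOSConfig.h (fragment) -- sketch only.
 * vApplicationMallocFailedHook() runs when pvPortMalloc() fails, so the
 * MQTT client's buffers are likely exhausting the FreeRTOS heap. */
#define configTOTAL_HEAP_SIZE           ( ( size_t ) ( 48 * 1024 ) )  /* assumed size */
#define configUSE_MALLOC_FAILED_HOOK    1   /* keep allocation failures visible */
```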

Breakpoint issue with shared memory SMP configuration


I'm using CCS 6 for the first time to load a SYS/BIOS SMP binary onto DRA7xx/IPU1. I get the following errors in the console window. This same program worked with CCS 5.5.

Cortex_M4_IPU1_C1: Trouble Setting Breakpoint with the Action "Process CIO" at 0x86012942: (Error -1067 @ 0x86012942) There is already a breakpoint at the requested address. This error may be caused by a shared memory SMP configuration. You may consider setting up shared memory in the memory map. (Emulation package 5.1.507.0)
Cortex_M4_IPU1_C1: Breakpoint Manager: Retrying with a AET breakpoint
Cortex_M4_IPU1_C1: Breakpoint Manager: Error enabling this function: Address exceeds the allowed range
Cortex_M4_IPU1_C1: Trouble Setting Breakpoint with the Action "Terminate Program Execution" at 0x86013a30: (Error -1067 @ 0x86013A30) There is already a breakpoint at the requested address. This error may be caused by a shared memory SMP configuration. You may consider setting up shared memory in the memory map. (Emulation package 5.1.507.0)
Cortex_M4_IPU1_C1: Breakpoint Manager: Retrying with a AET breakpoint
Cortex_M4_IPU1_C1: Breakpoint Manager: Error enabling this function: Address exceeds the allowed range

What should I be doing in order to avoid these errors?

Thanks
~Ramsey

 

DRV3201-Q1 Startup time of Boost converter


Hello,

The customer wants to know the startup time of the boost converter from the point where B_EN changes to high.

I have already answered as follows:

"The time until the boost reaches its final value strongly depends on the input voltage, the coil, the boost capacitance, the load, the resistor on GNDLS_B, and probably some other parameters. Best would be to read out the boost undervoltage flag of the STAT1 register and wait until it goes low."

 

Please answer the following additional questions related to the above requirement.

 

Q1: Startup time of the typical application

Please advise the minimum and maximum startup time for the case of "Figure 16. DRV3201-Q1 Typical Application Diagram" in the datasheet (L = 22 µH, C = 1 µF, GNDLS_B resistor = 330 mΩ).

The customer wants to know how much the startup time varies.

 

Q2: VBOOSTUV threshold

Please advise whether the VBOOSTUV threshold (11 V min, 11.9 V max) is the same when the BOOST voltage is increasing and when it is decreasing, or different.

Does it have hysteresis?

 

Q3:

If VBOOSTUV does not have hysteresis, is there any risk that the boost undervoltage flag is unstable near the threshold voltage?

 

Best Regards.

Streaming between two DM365s using a GStreamer pipeline


Hi all,

I want to send video via RTP to another DM365. I'm using the following pipelines:

Sender
gst-launch -v v4l2src input-src=composite always-copy=false ! video/x-raw-yuv, format='(fourcc)'NV12, framerate=\(fraction\)30000/1001, width=640, height=480 ! dmaiperf ! queue ! TIVidenc1 codecName=h264enc engineName=codecServer ! rtph264pay ! udpsink host=192.168.1.106 port=5000

Receiver
gst-launch -v udpsrc port=5000 ! rtph264depay ! TIViddec2 displayBuffer=true codecName=h264dec engineName=codecServer ! queue ! TIDmaiVideoSink videoStd=D1_NTSC videoOutput=composite sync=false contiguousInputFrame=true

The pipelines above do not give me any errors, but when I execute them, nothing is shown.

Does anybody have the correct pipelines?
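For reference, rtph264depay generally needs the RTP caps declared on udpsrc, since a raw UDP stream carries no stream description; that alone can produce a pipeline that starts without errors but never shows video. A sketch of the receiver with explicit caps (the payload and clock-rate values are assumptions that must match the sender):

```shell
# Sketch only: receiver pipeline with explicit RTP caps on udpsrc.
CAPS="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96"
gst-launch -v udpsrc port=5000 caps="$CAPS" \
    ! rtph264depay \
    ! TIViddec2 codecName=h264dec engineName=codecServer \
    ! queue \
    ! TIDmaiVideoSink videoStd=D1_NTSC videoOutput=composite sync=false contiguousInputFrame=true
```

Alternatively, adding config-interval=1 to the sender's rtph264pay makes the payloader re-send SPS/PPS in-band, as the later "fatal bit error" thread's sender command does.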

Streaming live video between two DM365s - performance issues


I'm trying to stream live video between two DM365s, and at the receiving end it's prohibitively slow. I was wondering if anyone could help me construct a better GStreamer pipeline that will work for true live streaming, or suggest code modifications if necessary.

I am using version 4.02 of the SDK, developing on an Ubuntu 10.04 host.

Here are my boot arguments on the 365s:

Sender:

bootargs=console=ttyS0,115200n8 rw mem=65M video=davincifb:vid0=OFF:vid1=OFF:osd0=640x480x32,4050K dm365_imp.oper_mode=0 davinci_enc_mngr.ch0_output=LCD davinci_enc_mngr.ch0_mode=640x480 vpfe_capture.cont_bufoffset=0 vpfe_capture.cont_bufsize=6291456 root=/dev/nfs nfsroot=<nfs host>:/home/dm365 ip=dhcp

(Note: The sender's GStreamer and DMAI codes were modified to support displaying video on an LCD, but this is not the board I'm trying to display on at this time.)

Receiver:

bootargs=console=ttyS0,115200n8 rw mem=65M video=davincifb:vid0=OFF:vid1=OFF:osd0=720x576x16,4050K dm365_imp.oper_mode=0 davinci_capture.device_type=4 vpfe_capture.cont_bufsize=6291456 davinci_enc_mngr.ch0_output=COMPOSITE davinci_enc_mngr.ch0_mode=NTSC root=/dev/nfs nfsroot=<nfs host>:/home/dm365 ip=dhcp

Here are my gst-launch commands:

Sender: with a camera hooked up to the composite input:

gst-launch -v \
v4l2src input-src=composite always-copy=false \
! video/x-raw-yuv, format=\(fourcc\)NV12, framerate=\(fraction\)30000/1001, \
    width=640, height=480 \
! dmaiperf \
! queue \
! TIVidenc1 codecName=h264enc engineName=codecServer \
! rtph264pay \
! udpsink host=<receiver IP> port=5000

Receiver:

CAPS='"application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96"'
gst-launch -v \
    udpsrc port=5000 caps=$CAPS \
    ! rtph264depay \
    ! TIViddec2 displayBuffer=true codecName=h264dec engineName=codecServer \
    ! queue \
    !  TIDmaiVideoSink \
        videoStd=D1_NTSC \
        videoOutput=composite \
        sync=false \
        contiguousInputFrame=true

I don't see anything odd (no warnings/errors) in the buffer display and caps output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:src: caps = video/x-h264
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:sink: caps = video/x-h264
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)1280, height=(int)720
[B                     |                                                      ]
[RW                    |                                                      ]
[R-W                   |                                                      ]
[R--W                  |                                                      ]
[R---W                 |                                                      ]
[R----W                |                                                      ]
[R-----W               |                                                      ]
[R------W              |                                                      ]
[R-------W             |                                                      ]
[R--------W            |                                                      ]
[R---------W           |                                                      ]
[R----------W          |                                                      ]
[R-----------W         |                                                      ]
[R------------W        |                                                      ]
[R-------------W       |                                                      ]
[R--------------W      |                                                      ]
[R---------------W     |                                                      ]
[R----------------W    |                                                      ]
[R-----------------W   |                                                      ]
[R------------------W  |                                                      ]
[R-------------------W |                                                      ]
[R--------------------W|                                                      ]
[R=====================W                                                      ]
[R=====================|W                                                     ]
[R=====================|=W                                                    ]
[R=====================|==W                                                   ]
[R=====================|===W                                                  ]
[R=====================|====W                                                 ]
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480
/GstPipeline:pipeline0/GstTIDmaiVideoSink:tidmaivideosink0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480

At this point, the receiver displays video, but it takes around 45 seconds for video to display, and there's a 1-2 second delay between printing each line of the buffer status!

I found this thread explaining how the TIViddec2 buffer works:

https://gstreamer.ti.com/gf/project/gstreamer_ti/forum/%3C/?_forum_action=ForumMessageBrowse&thread_id=3676&action=ForumBrowse&forum_id=187

If I am understanding correctly, it sounds like nothing will display on the receiver until TIViddec2's internal buffer has filled up enough, which is what happens at the "|====W" line when the video plays.

However, again, it takes around 45 seconds for the buffer to fill up and for the video to display. This is way, way too slow for a "live" stream. When I play a pre-recorded video file on the 365 (over NFS), the video buffer is filled up immediately (or close enough), and the video displays right away. When I send a live video stream from a 365 to my Ubuntu host, the stream experiences a delay of only around half a second, one second maximum.

Here's my command to decode a movie on a 365:

gst-launch -v \
filesrc location=/usr/share/ti/data/videos/davincieffect.264 \
! TIViddec2 codecName=h264dec engineName=codecServer \
! dmaiperf print-arm-load=TRUE \
! TIDmaiVideoSink useUserptrBufs=TRUE \
    displayStd=v4l2 displayDevice=/dev/video2 \
    videoStd=D1_NTSC videoOutput=composite sync=false

Here's my command to receive live video (from a 365) on the Ubuntu host:

CAPS="application/x-rtp, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480"
gst-launch -v \
   udpsrc port=5000 caps="$CAPS" \
   ! rtph264depay \
   ! ffdec_h264 \
   ! ffmpegcolorspace \
   ! queue \
   ! autovideosink sync=false

Since there is no delay when sending from a 365 to the Ubuntu host, and since there is no delay when playing a pre-recorded video on a 365 (i.e. I know the decoder's buffer is capable of filling up quickly), I am baffled by the slow performance when streaming between two 365s. Is there an intermediate buffer that's waiting to fill up before passing data along to the decoder? Is the issue in the udpsrc or rtph264depay element on the receiver? Or is this an issue with TIViddec2? Is there a way to configure TIViddec2 to be more responsive? Or should I try a different decoder?

Any suggestions, including code modifications, are greatly appreciated!

(P.S. Sorry if this should have gone in a GStreamer forum. I recently submitted a similar question to support and was directed to this forum.)

Gstreamer running with DVSDK 4.01 DM365 EVM - Streaming between two EVM boards (fatal bit error)


We want to stream between two EVM boards: one to capture and encode the audio/video, the other to display it.

Today we were able to get this working pretty well, for a while. The decoder side (client) would crash, sometimes after a couple of seconds and other times after minutes.

 

The error was

 

ERROR: from element /GstPipeline:pipeline0/GstTIViddec2:tividdec20: fatal bit error

The commands we used were:

Encoder/ Server


gst-launch v4l2src always-copy=FALSE !'video/x-raw-yuv,format=(fourcc)NV12,framerate=(fraction)30/1, width=(int)1280, height=(int)720' ! TIVidenc1 engineName=codecServer codecName=h264enc contiguousInputFrame=TRUE ! rtph264pay pt=96 config-interval=1 ! udpsink host=175.14.0.209 port=5000 sync=false

 

Decoder/Client

gst-launch -v udpsrc port=5000 caps="application/x-rtp,media=(string)video, payload=96, clock-rate=90000" ! rtph264depay ! typefind ! TIViddec2 codecName=h264dec engineName=codecServer ! TIDmaiVideoSink useUserptrBufs=true numBufs=3 videoStd=720P_60 videoOutput=component sync=false hideOSD=true

 

We start the Client first, then the Server. After 5 seconds the video starts playing. After a couple of minutes (a random amount of time) the decoder exits with an error.

 

 

 

Output:

 

gst-launch -v udpsrc port=5000 caps="application/x-rtp,media=(string)video, payload=96, clock-rate=90000" ! rtph264depay ! typefind ! TIViddec2 codecName=h264dec engineName=codecServer ! TIDmaiVideoSink useUserptrBufs=true numBufs=3 videoStd=720P_60 videoOutput=component sync=false hideOSD=true

Setting pipeline to PAUSED ...

/GstPipeline:pipeline0/GstUDPSrc:udpsrc0.GstPad:src: caps = application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264

Pipeline is live and does not need PREROLL ...

Setting pipeline to PLAYING ...

New clock: GstSystemClock

/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:src: caps = video/x-h264

/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264

/GstPipeline:pipeline0/GstTypeFindElement:typefindelement0.GstPad:src: caps = video/x-h264

/GstPipeline:pipeline0/GstTypeFindElement:typefindelement0.GstPad:sink: caps = video/x-h264

/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:sink: caps = video/x-h264

/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)1280, height=(int)720

*************NOTE: THIS IS WHERE IT STOPS WHEN IT HANGS…******************

/GstPipeline:pipeline0/GstTIDmaiVideoSink:tidmaivideosink0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)1280, height=(int)720

davinci_v4l2 davinci_v4l2.1: Before finishing with S_FMT:

layer.pix_fmt.bytesperline = 1280,

layer.pix_fmt.width = 1280,

 layer.pix_fmt.height = 720,

 layer.pix_fmt.sizeimage =1382400

davinci_v4l2 davinci_v4l2.1: pixfmt->width = 1280,

layer->layer_info.config.line_length= 1280

*************NOTE: Starts playing normal video

 

ERROR: from element /GstPipeline:pipeline0/GstTIViddec2:tividdec20: fatal bit error

 

Additional debug info:

gsttividdec2.c(1635): gst_tividdec2_decode_thread (): /GstPipeline:pipeline0/GstTIViddec2:tividdec20

Execution ended after 151664182917 ns.

Setting pipeline to PAUSED ...

Setting pipeline to READY ...

/GstPipeline:pipeline0/GstTIDmaiVideoSink:tidmaivideosink0.GstPad:sink: caps = NULL

/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = NULL

/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:sink: caps = NULL

/GstPipeline:pipeline0/GstTypeFindElement:typefindelement0.GstPad:src: caps = NULL

/GstPipeline:pipeline0/GstTypeFindElement:typefindelement0.GstPad:sink: caps = NULL

/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:src: caps = NULL

/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = NULL

/GstPipeline:pipeline0/GstUDPSrc:udpsrc0.GstPad:src: caps = NULL

Setting pipeline to NULL ...

Freeing pipeline ...

 

 

End of debug

 

What can be done to fix the fatal bit error?

 

Also, how do we fix the problem where the Client MUST be running before the Server?

Thanks


Bill


How do I make a DSS script delay for 5 seconds?


I'm using CCS 6 and a DSS script to load and run my program on the DRA7xx. I would like to run my program for 5 seconds and then halt it. There are 9 processors in my test, and I call Target.runAsync() on each of them.

After getting them all running, I would like my script to sleep for 5 seconds, then call Target.halt() on each processor. How do I make my DSS script sleep for 5 seconds?

I've tried setTimeout(), the usual recommendation for making JavaScript sleep, but it comes up as undefined.
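For what it's worth, DSS scripts execute on the Rhino engine inside a JVM rather than in a browser, which is why setTimeout() is undefined; the Java thread API is available instead. A minimal sketch (the busy-wait branch is only there so the snippet also runs outside the JVM):

```javascript
// Sleep helper for a DSS script: Rhino has no setTimeout(), but it can
// call straight into the JVM's java.lang.Thread.sleep().
function sleepMs(ms) {
    if (typeof java !== "undefined") {
        java.lang.Thread.sleep(ms);     // DSS / Rhino path
    } else {
        var end = Date.now() + ms;      // busy-wait fallback so the sketch
        while (Date.now() < end) {}     // also runs outside the JVM
    }
}

// After calling Target.runAsync() on each of the 9 processors:
// sleepMs(5000);
// ...then call Target.halt() on each processor.
```

In an actual DSS script the single call java.lang.Thread.sleep(5000) is all that is needed.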

Thanks
~Ramsey

GLSDK ti-glsdk_dra7xx-evm_6_10_00_02 build problem


On a 64-bit Ubuntu 12.04 LTS VM (in VirtualBox on a 64-bit Ubuntu 12.04 LTS host), during the build:

./build-core-sdk.sh dra7xx-evm

WARNING: Failed to fetch URL ftp://ftp.alsa-project.org/pub/utils/alsa-utils-1.0.27.2.tar.bz2, attempting MIRRORS if available
WARNING: Failed to fetch URL http://0pointer.de/lennart/projects/nss-mdns/nss-mdns-0.10.tar.gz, attempting MIRRORS if available
WARNING: Failed to fetch URL git://git.omapzoom.org/kernel/omap.git;protocol=git;branch=p-ti-linux-3.12.y, attempting MIRRORS if available
ERROR: Fetcher failure: Fetch command failed with exit code 128, output:
Cloning into bare repository '/home/javad/GLSDK/yocto-layers/downloads/git2/git.omapzoom.org.kernel.omap.git'...

fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

ERROR: Function failed: Fetcher failure for URL: 'git://git.omapzoom.org/kernel/omap.git;protocol=git;branch=p-ti-linux-3.12.y'. Unable to fetch URL from any source.
ERROR: Logfile of failure stored in: /home/javad/GLSDK/yocto-layers/build/arago-tmp-external-linaro-toolchain/work/dra7xx_evm-oe-linux-gnueabi/linux-ti-glsdk/3.12.25-r3d+gitrAUTOINC+be43a19946/temp/log.do_fetch.19839
ERROR: Task 90 (/home/javad/GLSDK/yocto-layers/sources/meta-glsdk/recipes-kernel/linux/linux-ti-glsdk_3.12.bb, do_fetch) failed with exit code '1'

The reason I tried to build inside the VM is that I got the following error on my host with both ti-glsdk_dra7xx-evm_6_10_00_01 and ti-glsdk_dra7xx-evm_6_10_00_02:

ERROR: Function failed: do_rootfs (log file is located at /home/javad/GLSDK/yocto-layers/build/arago-tmp-external-linaro-toolchain/work/dra7xx_evm-oe-linux-gnueabi/arago-glsdk-multimedia-image/1.0-r0/temp/log.do_rootfs.28302)
ERROR: Task 8 (/home/javad/GLSDK/yocto-layers/sources/meta-glsdk/meta-arago-distro/recipes-core/images/arago-glsdk-multimedia-image.bb, do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 5474 tasks of which 5473 didn't need to be rerun and 1 failed.

I also built the same ti-glsdk_dra7xx-evm_6_10_00_02 on a 32-bit Ubuntu 12.04 VM (under the same host). That build had no problems and completed.

DAC8831EVM: help with strange noise


I am trying to use the DAC8831 and made a PCB based on the DAC8831EVM schematic. To test it, I supplied ±5 V and looked at the output, which should be the result of the power-on reset. However, I saw a well-defined pattern, which I think is noise; see the attached figure. The measurement was taken AC-coupled. After trying a few things, I decided to look at the DAC8831EVM for reference. However, the same "noise" also exists on the DAC8831EVM output. I also tried changing the power supply, to ensure that the analog section was supplied from a linear power supply instead of a switching power supply. I also tried grounding !LDAC and pulling up !CS, which had no effect. On the DAC8831EVM, the repeating pattern might go away when the switch is set to unipolar.

Would someone please help me understand what is going on, and whether it is possible to remove this noise signal? Is it actually noise? I am assuming it is, because a 20 mV signal compared to a 2 V range would be around 8-bit resolution, far worse than the 16-bit resolution of this DAC. I know it is very difficult to get the full 16 bits, but I am hoping to do better than the current performance of 20 mV out of 2.5 V.

Can someone please help?

Ultrasound AFE eval/dev platforms: LM965xx


I have been looking at the LM965xx chipset, but I still have questions on the best way to evaluate it in a system (TX-SDK-V2, LM96511EVK, AFE5803EVM, …).

Is there a recommended platform to start with?

-Jaden

 

 

Problems with SSI2 on tm4c123gh6pm


Hi, I'm using the TM4C123GH6PM LaunchPad and I'm trying to activate the SPI. SSI0 works perfectly; I can see all the signals on my logic analyzer. However, I need to use SSI2 because of my pinout requirements. When I configure SSI2, I can see SSI2Tx on my logic analyzer, but there is no clock output! I realized that PB4 shares functionality with AIN10; maybe I need to do some extra configuration? Can somebody help me?

Regards.
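For context, each SSI2 pin on port B (PB4-PB7) needs its alternate function selected with GPIOPinConfigure(), and since PB4 doubles as analog input AIN10, its digital function must also be enabled (GPIOPinTypeSSI() does this). A missing GPIO_PB4_SSI2CLK mux setting would match the symptom of TX toggling with no clock. A sketch assuming the TivaWare driverlib API; untested here:

```c
/* Sketch only (TivaWare driverlib): bringing up SSI2 on the TM4C123GH6PM. */
#include <stdint.h>
#include <stdbool.h>
#define PART_TM4C123GH6PM            /* selects the right pin_map.h entries */
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"
#include "driverlib/pin_map.h"
#include "driverlib/ssi.h"

void ConfigSSI2(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_SSI2);
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);

    GPIOPinConfigure(GPIO_PB4_SSI2CLK);  /* PB4: clock (shared with AIN10) */
    GPIOPinConfigure(GPIO_PB5_SSI2FSS);  /* PB5: frame select */
    GPIOPinConfigure(GPIO_PB6_SSI2RX);
    GPIOPinConfigure(GPIO_PB7_SSI2TX);

    /* Also (re-)enables the digital function on PB4, which its
     * analog AIN10 mode would otherwise keep disabled. */
    GPIOPinTypeSSI(GPIO_PORTB_BASE,
                   GPIO_PIN_4 | GPIO_PIN_5 | GPIO_PIN_6 | GPIO_PIN_7);

    SSIConfigSetExpClk(SSI2_BASE, SysCtlClockGet(), SSI_FRF_MOTO_MODE_0,
                       SSI_MODE_MASTER, 1000000, 8);  /* 1 MHz, 8-bit */
    SSIEnable(SSI2_BASE);
}
```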


