Overview of our approach to modelling an inner language

In the previous part on basic action, we showed a data-driven approach to action by means of joint rotation. This approach could model a simple stride, a laterally symmetric second stride, and repetitions. We could even coordinate arm and leg movements for our walk. However, skilled action involves more complex sequences that cannot easily be captured in a single data frame. The obvious next step would be sequences of successive data frames, but such sequences quickly become awkward and unwieldy.
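To make the data-frame idea concrete, the following minimal Python sketch represents a frame as a mapping from joint names to rotation angles. The joint names, the angle values, and the alternation rule for the walk are illustrative assumptions, not part of the original model.

    # A data frame: joint names mapped to rotation angles (degrees).
    # All joint names and angle values here are illustrative assumptions.
    Frame = dict[str, float]

    stride: Frame = {
        "left_hip": 25.0, "left_knee": -10.0,
        "right_hip": -20.0, "right_knee": 5.0,
        "left_shoulder": -15.0, "right_shoulder": 15.0,  # coordinated arm swing
    }

    def mirror(frame: Frame) -> Frame:
        """The laterally symmetric second stride: swap left and right joints."""
        def swap(name: str) -> str:
            return (name.replace("left", "right") if "left" in name
                    else name.replace("right", "left"))
        return {swap(name): angle for name, angle in frame.items()}

    def walk(steps: int) -> list[Frame]:
        """Repetition: alternate the stride with its mirrored counterpart."""
        cycle = [stride, mirror(stride)]
        return [cycle[i % 2] for i in range(steps)]

Even for this one skill, the sequence-of-frames version is just a growing list of frames; richer skills would need many more, which is the awkwardness noted above.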

Another drawback of this approach is that skills do not decompose easily into skill components. For our walk, for instance, we would need a separate, complete skill for walking with both arms hanging or with the hands in the pockets. It seems much more likely that the arm movements are a component separate from the leg movements in the stride.
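A component-based representation might instead keep arm and leg movements as separate partial frames that are merged per step, as in the sketch below; the component names and the merge-by-union rule are assumptions made for illustration.

    # Skill components as partial frames, merged into one full data frame.
    # Component names and the merge rule are illustrative assumptions.
    leg_stride = {"left_hip": 25.0, "left_knee": -10.0,
                  "right_hip": -20.0, "right_knee": 5.0}
    arm_swing = {"left_shoulder": -15.0, "right_shoulder": 15.0}
    arms_hanging = {"left_shoulder": 0.0, "right_shoulder": 0.0}

    def compose(*components: dict[str, float]) -> dict[str, float]:
        """Merge independent components into one full frame."""
        frame: dict[str, float] = {}
        for component in components:
            frame.update(component)
        return frame

    normal_walk = compose(leg_stride, arm_swing)
    casual_walk = compose(leg_stride, arms_hanging)  # same legs, different arms

With such a decomposition, changing the arm behaviour no longer requires duplicating the leg skill.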

A further drawback is that the data-frame representation is too specific. We would need additional frames for walking in a curve or around a corner, and for walking uphill or downhill. It seems more likely that the skill representation is more general and is automatically or semi-automatically adapted to the circumstances.
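One way to read "more general" is as a parameterised generator that adapts a single skill to the circumstances. The particular parameters (turn, slope) and the adjustment rules below are assumptions for illustration only.

    # A generalised stride: parameters adapt one skill to circumstances.
    # The parameters and adjustment rules are illustrative assumptions.
    def stride_frame(turn_deg: float = 0.0,
                     slope_deg: float = 0.0) -> dict[str, float]:
        frame = {"left_hip": 25.0, "left_knee": -10.0,
                 "right_hip": -20.0, "right_knee": 5.0}
        # Turning: bias the hips toward the inside of the curve.
        frame["left_hip"] += turn_deg / 2
        frame["right_hip"] -= turn_deg / 2
        # Uphill: raise both hips a little to lift the feet higher.
        if slope_deg > 0:
            frame["left_hip"] += slope_deg
            frame["right_hip"] += slope_deg
        return frame

    corner = stride_frame(turn_deg=10.0)   # walking around a corner
    uphill = stride_frame(slope_deg=8.0)   # walking uphill

A single parameterised skill thus replaces a whole family of special-purpose data frames.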

Yet another consideration, to be addressed further down the road, is the integration of perception into skilled action.

We propose an 'inner language' as a way of supplementing the data frames.
