SCIENTIA SINICA Informationis, Volume 49, Issue 2: 229-244 (2019) https://doi.org/10.1360/N112018-00204

Automatic generation of Labanotation for national dynamic art digitalization

More info
  • Received: Nov 25, 2018
  • Accepted: Jan 9, 2019
  • Published: Feb 18, 2019



The authors thank Luo Bingyu for patient guidance on Labanotation, Li Song for great help in combining folk culture with technology, and the Center for Ethnic and Folk Literature and Art Development of the Ministry of Culture for generous support with equipment and data.




Question 1: Based on your experience, approximately how accurate is the Labanotation generated by the system?

Answer 1: 68% to 93%.

Question 2: For a Labanotation recording task, would you use the notation generated by this system as a reference? If so, how does it help you?

Answer 2: All 8 participants answered yes. The reported benefits: it serves as an aid, suggests an approach for recording the notation, and shortens the time needed to record Labanotation; it also provides a reference for ambiguous movements.

Question 3: (After watching a motion capture sequence and recording its notation) Based on your experience, approximately how accurate is your recorded Labanotation? And how accurate is it with the assistance of the generation system?

Answer 3: 70% to 90% (recorded alone); 75% to 93% (with system assistance).

Question 4: General feedback.

Answer 4: (1) The generated notation reflects the overall movement characteristics and rhythm, so it suggests an approach for the recording task and reduces the workload, improving efficiency by roughly 20% to 50%. (2) The system can only process the motion data of a single person and cannot handle interactive two-person movements; it handles simple movements with a clear rhythm well, but performs poorly on complex rotational movements.


[1] Guo H. Research on automatic generation of Labanotation based on human motion capture data. Dissertation for Master's Degree. Beijing: Beijing Jiaotong University, 2015.

[2] Hutchinson A. Labanotation. J Am Folklore, 1955, 68: 89.

[3] Guest A H. Labanotation or Kinetography Laban: the System of Analyzing and Recording Movement. 3rd ed. New York: Theatre Arts Books, 1970.

[4] Xiang Z R, Zhi J Y, Xu B C, et al. Motion capture technology and its application research review. Comput Appl Res, 2013, 30: 2241-2245.

[5] Johansson G. Visual perception of biological motion and a model for its analysis. Perception Psychophysics, 1973, 14: 201-211.

[6] Villegas R, Yang J M, Ceylan D, et al. Neural kinematic networks for unsupervised motion retargetting. 2018. arXiv.

[7] Meredith M, Maddock S. Motion capture file formats explained. Department of Computer Science, University of Sheffield, 2001.

[8] Liang Q H. Research on key technology of motion capture in digital dynamic art. Dissertation for Ph.D. Degree. Beijing: Beijing Jiaotong University, 2016.

[9] Hachimura K, Nakamura M. Method of generating coded description of human body motion from motion-captured data. In: Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication, 2001. 122-127.

[10] Chen H, Qian G, James J. An autonomous dance scoring system using marker-based motion capture. In: Proceedings of the 7th Workshop on Multimedia Signal Processing, 2005.

[11] Choensawat W, Nakamura M, Hachimura K. GenLaban: a tool for generating Labanotation from motion capture data. Multimed Tools Appl, 2015, 74: 10823-10846.

[12] Guo H, Miao Z J, Zhu F Y, et al. Automatic Labanotation generation based on human motion capture data. In: Proceedings of Chinese Conference on Pattern Recognition, 2014. 426-435.

[13] Zhou Z M, Miao Z J, Wang J J. A system for automatic generation of Labanotation from motion capture data. In: Proceedings of the 13th International Conference on Signal Processing (ICSP), 2016. 1031-1034.

[14] Zhou Z M. Research on automatic generation of Labanotation based on dynamic programming. Dissertation for Master's Degree. Beijing: Beijing Jiaotong University, 2017.

[15] Yu T, Shen X, Li Q. Motion retrieval based on movement notation language. Comp Anim Virtual Worlds, 2005, 16: 273-282.

[16] Shen X J, Li Q L, Yu T, et al. Mocap data editing via movement notations. In: Proceedings of International Conference on Computer Aided Design and Computer Graphics, 2006. 463-470.

[17] Wang J, Miao Z, Guo H. Using automatic generation of Labanotation to protect folk dance. J Electron Imag, 2017, 26: 011028.

[18] Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: theory and applications. Neurocomputing, 2006, 70: 489-501.

[19] Huang G B, Zhou H M, Ding X J. Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B, 2012, 42: 513-529.

[20] Huang Z, Yu Y, Gu J. An efficient method for traffic sign recognition based on extreme learning machine. IEEE Trans Cybern, 2017, 47: 920-933.

[21] Li M, Miao Z. Automatic Labanotation generation from motion-captured data based on hidden Markov models. In: Proceedings of the 4th Asia Conference on Pattern Recognition, 2017.

[22] Guest A H. Laban Recording Method: Action Analysis and Recording System. Beijing: China Foreign Translation and Publishing Co., Ltd., 2013.

  • Figure 1

Example of Labanotation with 4 pages

  • Figure 2

Structure of Labanotation. The "L", "C", and "R" represent left, center, and right, respectively

  • Figure 3

27 basic symbols of Labanotation and the corresponding spatial partition

  • Figure 4

(Color online) Flow chart of generating Labanotation based on human motion capture data

  • Figure 5

    2/4 beat rhythm of Labanotation

  • Figure 6

(Color online) Body plane and vector that represents the front of the human body

  • Figure 7

(Color online) Generated Labanotation based on the motion capture data of drum Yangko dance (partially modified)

  • Figure 8

(Color online) Comparison of original human motion and the corresponding generated Labanotation

  • Figure 9

(Color online) Video screenshots of traditional routine clips of Shandong drum Yangko, synthesized from video data (three channels), motion capture data, and generated Labanotation. There are nine screenshots; each shows the live video shot from three different angles on the left, the motion capture data in the middle, and the corresponding Labanotation on the right

  • Figure 10

Comparison of expert records and generated Labanotation of six kinds of basic motion. (a) Go forward; (b) go right forward; (c) forward low, right low; (d) forward low, origin low; (e) forward, right, backward; (f) backward, left, forward

  • Table 1   Relationship between angle $\alpha$ and the horizontal direction of Labanotation
    Value of angle $\alpha$ Horizontal direction
    $[-22.5^\circ,~22.5^\circ]$ Forward
    $(22.5^\circ,~67.5^\circ]$ Left forward
    $(67.5^\circ,~112.5^\circ]$ Left
    $(112.5^\circ,~157.5^\circ]$ Left back
    $(157.5^\circ,~180^\circ]~\cup~[-180^\circ,~-157.5^\circ)$ Back
    $[-157.5^\circ,~-112.5^\circ)$ Right back
    $[-112.5^\circ,~-67.5^\circ)$ Right
    $[-67.5^\circ,~-22.5^\circ)$ Right forward
  • Table 2   Relationship between angle $\beta$ and the vertical direction of Labanotation
    Absolute value of angle $\beta$ Vertical direction
    $[0^\circ,~30^\circ]$ High
    $(30^\circ,~150^\circ]$ Middle
    $(150^\circ,~180^\circ]$ Low
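Tables 1 and 2 define a piecewise mapping from the two orientation angles to Labanotation direction symbols. A minimal Python sketch of that table lookup (the function names `horizontal_direction` and `vertical_direction` are illustrative, not from the paper; angles are assumed to be given in degrees, with $\alpha \in [-180^\circ, 180^\circ]$):

```python
def horizontal_direction(alpha: float) -> str:
    """Map angle alpha (degrees, in [-180, 180]) to the Labanotation
    horizontal direction, following the intervals of Table 1."""
    if -22.5 <= alpha <= 22.5:
        return "Forward"
    if 22.5 < alpha <= 67.5:
        return "Left forward"
    if 67.5 < alpha <= 112.5:
        return "Left"
    if 112.5 < alpha <= 157.5:
        return "Left back"
    if alpha > 157.5 or alpha < -157.5:
        return "Back"          # (157.5, 180] or [-180, -157.5)
    if -157.5 <= alpha < -112.5:
        return "Right back"
    if -112.5 <= alpha < -67.5:
        return "Right"
    return "Right forward"     # [-67.5, -22.5)


def vertical_direction(beta: float) -> str:
    """Map the absolute value of angle beta (degrees) to the Labanotation
    vertical direction, following the intervals of Table 2."""
    b = abs(beta)
    if b <= 30:
        return "High"
    if b <= 150:
        return "Middle"
    return "Low"
```

Each limb segment in a frame then yields one of the 27 basic symbols of Figure 3 by combining the horizontal and vertical lookups.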
  • Table 3   Comparison of the approaches based on rules [12], templates [13], HMM [21], and our method
    Accuracy (%) Rules [12] Template [13] HMM [21] Ours
    Left arm 80.25 71.03 83.69
    Right arm 82.50 73.42 83.17
    Left leg 64.23 85.83 87.09 88.72
    Right leg 60.71 83.90 86.62 86.24
    Weighted average 68.37 80.90 86.20