DETAILED ACTION
Amendment received 30 December 2025 is acknowledged. Claims 1-20 are pending and have been considered as follows.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ng-Thow-Hing (US Pub. No. 2012/0191460).
As per Claim 1, Ng-Thow-Hing discloses a robot (100) (Figs. 1-2; ¶39, 43) comprising:
a camera (220, 240) (Fig. 2; ¶43, 48-49);
a speaker (266) (Fig. 2; ¶43, 50);
at least one motor (250) (Fig. 2; ¶38-41, 43, 46-47);
a memory (330) storing one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) (Figs. 2-3A; ¶42-44, 52, 56); and
one or more processors (310) communicatively coupled to the camera (220, 240), the speaker (266), the motor (250) and the memory (330) (Figs. 2-3A; ¶43, 52-53, 56),
wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to:
detect a user (as per “point to an object or person” in ¶49) based on an image (as per “map the locations of events or objects surrounding the robot 100 into a panoramic coordinate system” in ¶49) obtained through the camera (220, 240) (Fig. 2; ¶43, 48-49),
obtain a plurality of sentences (as per 204/336) to be uttered (via speaker 266) by the robot (100) to the user (as per “point to an object or person” in ¶49) (Figs. 2, 3A-B; ¶43-45, 52, 55, 58-62),
identify a first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to which a motion to be performed (as per gesture identifier 352 to be selected by gesture selection 350 from gesture identifiers 342A-N) while the robot (100) utters (via speaker 266) is not allocated among the plurality of sentences (as per 204/336) (Figs. 2, 3A-B; ¶43-66),
identify a second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) including a pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62), the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to be uttered (as per speaker 266) in a second time section adjacent (as per second sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) to a first time section (as per first sentence adjacent second sentence in “analyzed timing of speech elements” in ¶92) in which the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) is to be uttered (via speaker 266), the pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being pre-allocated when identifying the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 7A-B; ¶43-66, 87-92),
identify one or more motions (as per motion mapped to gesture identifier 352) having a similarity (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) to the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) less than a predetermined value (as per detection of “repetitive motion” in ¶82, 84) among a plurality of pre-stored motions (as per “motion template database 620”) in the memory (330) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
obtain a first motion (as per motion mapped to gesture identifier 352 for first sentence) different from the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) pre-allocated to the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) among the identified one or more motions (as per motion mapped to gesture identifier 352 for first sentence) as a motion corresponding to the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95), and
control the speaker (266) to output a voice corresponding to the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) and control the at least one motor (250) to perform the first motion (as per motion mapped to gesture identifier 352 for first sentence) while the voice is output (Figs. 2, 3A-B; ¶43-66).
As per Claim 2, Ng-Thow-Hing further discloses wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to:
obtain the first motion (as per motion mapped to gesture identifier 352 for first sentence) by randomly selecting (as per “motion controller 640 receives a random number 622 from the motion randomizer 630 to afford randomness to the trajectory” in ¶83) one of the identified one or more motions (as per motion mapped to gesture identifier 352) (Figs. 6, 7A-B; ¶74-95).
As per Claim 3, Ng-Thow-Hing further discloses wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to obtain similarities (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) by comparing histograms (as per data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) corresponding to the plurality of pre-stored motions (as per “motion template database 620”) and a histogram (as per data corresponding to the motion mapped to gesture identifier 352 for second sentence) corresponding to the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 4, Ng-Thow-Hing further discloses
wherein the memory (330) stores a similarity table (as per organized data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) including similarities (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) including the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62), and
wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to obtain the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the similarity table (as per organized data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 5, Ng-Thow-Hing further discloses wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to:
identify a third motion (as per motion mapped to gesture identifier 352 for third sentence) allocated to a third sentence (as per third sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to be uttered (as per speaker 266) in a third time section that is adjacent to the first time section (as per third sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) and different from the second time section (as per second sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) (Figs. 2, 3A-B, 7A-B; ¶43-66, 87-92),
obtain first similarities (as per first evaluated “repetitive motion” in ¶82, as per first evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) and obtain second similarities (as per second evaluated “repetitive motion” in ¶82, as per second evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the third motion (as per motion mapped to gesture identifier 352 for third sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95),
obtain a plurality of average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) of the first similarities (as per first evaluated “repetitive motion” in ¶82, as per first evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) and the second similarities (as per second evaluated “repetitive motion” in ¶82, as per second evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) corresponding to the plurality of pre-stored motions (as per “motion template database 620”) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95),
identify one or more average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) less than a predetermined value (as per “a high value may be set for continuity to create smooth motions” in ¶85) among the plurality of average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) and identify one or more motions corresponding to the one or more average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) among the plurality of pre-stored motions (as per “motion template database 620”) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95), and
identify one of the one or more motions randomly (as per random number 622) and obtain the motion as the first motion (as per motion mapped to gesture identifier 352 for first sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 6, Ng-Thow-Hing further discloses wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to:
obtain a size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) based on information (as per “style parameters” in ¶84) on the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95),
based on a ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of a size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being greater than a threshold value, adjust (as per “the motion generator 230 may modify the gesture as defined by the gesture descriptor to make motions appear more natural” in ¶92) the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) and the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95), and
based on the ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being less than or equal to the threshold value, maintain (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture (if any)” in ¶92) the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 7, Ng-Thow-Hing further discloses wherein the one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) further include computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by the one or more processors (310) individually or collectively, cause the robot (100) to:
calculate a weight (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture” in ¶92) based on the ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95), and
adjust (as per “the motion generator 230 may modify the gesture as defined by the gesture descriptor to make motions appear more natural” in ¶92) the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the weight (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture” in ¶92) so that the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) is less than the size (as per “A is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 8, Ng-Thow-Hing discloses a method performed by a robot (100) (Figs. 1-2; ¶39, 43), the method comprising:
detecting, by the robot (100), a user (as per “point to an object or person” in ¶49) (Fig. 2; ¶43, 48-49);
obtaining, by the robot (100), a plurality of sentences (as per 204/336) to be uttered (via speaker 266) by the robot (100) to the detected user (as per “point to an object or person” in ¶49) (Figs. 2, 3A-B; ¶43-45, 52, 55, 58-62);
identifying, by the robot (100), a first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to which a motion to be performed (as per gesture identifier 352 to be selected by gesture selection 350 from gesture identifiers 342A-N) while the robot (100) utters (via speaker 266) is not allocated among the plurality of sentences (as per 204/336) (Figs. 2, 3A-B; ¶43-66);
identifying, by the robot (100), a second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) including a pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62), the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to be uttered (as per speaker 266) in a second time section (as per second sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) adjacent to a first time section (as per first sentence adjacent second sentence in “analyzed timing of speech elements” in ¶92) in which the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) is to be uttered (via speaker 266), the pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being pre-allocated when identifying the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 7A-B; ¶43-66, 87-92);
identifying one or more motions (as per motion mapped to gesture identifier 352) having a similarity (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) to the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) less than a predetermined value (as per detection of “repetitive motion” in ¶82, 84) among a plurality of pre-stored motions (as per “motion template database 620”) in a memory (330) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
obtaining, by the robot (100), a first motion (as per motion mapped to gesture identifier 352 for first sentence) different from the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) pre-allocated to the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) among the identified one or more motions (as per motion mapped to gesture identifier 352 for first sentence) as a motion corresponding to the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95); and
performing, by the robot (100), the first motion (as per motion mapped to gesture identifier 352 for first sentence) while uttering (via speaker 266) the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B; ¶43-66).
As per Claim 9, Ng-Thow-Hing further discloses wherein the obtaining of the first motion (as per motion mapped to gesture identifier 352 for first sentence) comprises:
obtaining the first motion (as per motion mapped to gesture identifier 352 for first sentence) by randomly selecting (as per “motion controller 640 receives a random number 622 from the motion randomizer 630 to afford randomness to the trajectory” in ¶83) one of the identified one or more motions (as per motion mapped to gesture identifier 352) (Figs. 6, 7A-B; ¶74-95).
As per Claim 10, Ng-Thow-Hing further discloses wherein the identifying of the one or more motions (as per “The motion planner 610 also modifies the gestures as defined by the gesture descriptor 612 to retract or blend the current motion with other motions of the robot 100. For example, if a repetitive motion of reaching out an arm is to be taken repeatedly, the motion planner 610 adds a retrieving motion … to make the motions of the robot 100 appear natural.” in ¶82) comprises obtaining similarities (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) by comparing histograms (as per data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) corresponding to the plurality of pre-stored motions (as per “motion template database 620”) and a histogram (as per data corresponding to the motion mapped to gesture identifier 352 for second sentence) corresponding to the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 11, Ng-Thow-Hing further discloses
wherein the robot (100) stores a similarity table (as per organized data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) including similarities (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) including the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62), and
wherein the obtaining of the first motion (as per motion mapped to gesture identifier 352 for first sentence) comprises obtaining the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the similarity table (as per organized data corresponding to “repetitive motion” in ¶82, as per data corresponding to “F is the frequency of a gesture element … that is repetitive” in ¶84) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 12, Ng-Thow-Hing further discloses wherein the obtaining of the first motion (as per motion mapped to gesture identifier 352 for first sentence) comprises:
identifying a third motion (as per motion mapped to gesture identifier 352 for third sentence) allocated to a third sentence (as per third sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to be uttered (as per speaker 266) in a third time section that is adjacent to the first time section (as per third sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) and different from the second time section (as per second sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) (Figs. 2, 3A-B, 7A-B; ¶43-66, 87-92);
obtaining first similarities (as per first evaluated “repetitive motion” in ¶82, as per first evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) and obtaining second similarities (as per second evaluated “repetitive motion” in ¶82, as per second evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) between the plurality of pre-stored motions (as per “motion template database 620”) and the third motion (as per motion mapped to gesture identifier 352 for third sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
obtaining a plurality of average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) of the first similarities (as per first evaluated “repetitive motion” in ¶82, as per first evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) and the second similarities (as per second evaluated “repetitive motion” in ¶82, as per second evaluated “F is the frequency of a gesture element … that is repetitive” in ¶84) corresponding to the plurality of pre-stored motions (as per “motion template database 620”) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
identifying one or more average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) less than a predetermined value (as per “a high value may be set for continuity to create smooth motions” in ¶85) among the plurality of average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) and identifying one or more motions corresponding to the one or more average values (as per “Ci is for continuity” and “continuity refers to the sharpness in change between the incoming and outgoing tangent vectors at each point” for evaluating each motion relative to adjacent motions in ¶84) among the plurality of pre-stored motions (as per “motion template database 620”) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95); and
identifying one of the one or more motions randomly (as per random number 622) and obtaining the motion as the first motion (as per motion mapped to gesture identifier 352 for first sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 13, Ng-Thow-Hing further discloses
obtaining a size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) based on information (as per “style parameters” in ¶84) on the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
based on a ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of a size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being greater than a threshold value, adjusting (as per “the motion generator 230 may modify the gesture as defined by the gesture descriptor to make motions appear more natural” in ¶92) the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) and the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95); and
based on the ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per motion mapped to gesture identifier 352 for second sentence) being less than or equal to the threshold value, maintaining (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture (if any)” in ¶92) the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 14, Ng-Thow-Hing further discloses wherein the adjusting (as per “the motion generator 230 may modify the gesture as defined by the gesture descriptor to make motions appear more natural” in ¶92) of the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) comprises:
calculating a weight (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture” in ¶92) based on the ratio (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) of the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) to the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95); and
adjusting (as per “the motion generator 230 may modify the gesture as defined by the gesture descriptor to make motions appear more natural” in ¶92) the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) based on the weight (as per “The motion generator 230 plans 730 a gesture motion based on the analyzed timing of speech elements, the gesture descriptor and a previous gesture” in ¶92) so that the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the first motion (as per motion mapped to gesture identifier 352 for first sentence) is less than the size (as per “A is amplitude is the amplitude of the gesture trajectory” in ¶84) of the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95).
As per Claim 15, Ng-Thow-Hing further discloses at least one non-transitory computer-readable recording medium (330) on which a program for executing the method of claim 8 (see rejection of Claim 8) on a computer (140, 150) is recorded (Figs. 1-3A; ¶30, 39-43, 52, 56).
As per Claim 16, Ng-Thow-Hing further discloses controlling a first voice message output timing and a first motion performing timing (as per “The voice synthesizer 260 also provides an output 262 to the motion generator 230 to allow the motion generator 230 to check the progress of speech and make any adjustments to the speed of the gestures so that the speech and gestures can be synchronized” in ¶50) so that a time point when the first voice message starts to be output corresponds to a time point when the first motion starts to be performed (as per “the motion planner 610 may use the voice synthesizer output 262 to determine if the estimated timing of speech and actual timing of the speech generated by the voice synthesizer 260 match. If the timing does not match, the motion planner 610 delays or advances the motions according to the gesture descriptor 612” in ¶79).
As per Claim 18, Ng-Thow-Hing discloses one or more non-transitory computer-readable storage media (as per claim 20) storing one or more computer programs (as per “software, firmware, hardware or a combination thereof” in ¶42, as per “hardware, software, firmware or a combination thereof” in ¶43) including computer-executable instructions (as per “instructions” in ¶53, 56) that, when executed by one or more processors (310) of a robot (100) individually or collectively, cause the robot (100) to perform operations, the operations comprising:
detecting, by the robot (100), a user (as per “point to an object or person” in ¶49) (Fig. 2; ¶43, 48-49);
obtaining, by the robot (100), a plurality of sentences (as per 204/336) to be uttered (via speaker 266) by the robot (100) to the detected user (as per “point to an object or person” in ¶49) (Figs. 2, 3A-B; ¶43-45, 52, 55, 58-62);
identifying, by the robot (100), a first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to which a motion to be performed (as per gesture identifier 352 to be selected by gesture selection 350 from gesture identifiers 342A-N) while the robot (100) utters (via speaker 266) is not allocated among the plurality of sentences (as per 204/336) (Figs. 2, 3A-B; ¶43-66);
identifying, by the robot (100), a second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) including a pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62), the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) to be uttered (as per speaker 266) in a second time section (as per second sentence adjacent first sentence in “analyzed timing of speech elements” in ¶92) adjacent to a first time section (as per first sentence adjacent second sentence in “analyzed timing of speech elements” in ¶92) in which the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) is to be uttered (via speaker 266), the pre-allocated second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) being pre-allocated when identifying the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 7A-B; ¶43-66, 87-92);
identifying one or more motions (as per motion mapped to gesture identifier 352) having a similarity (as per “repetitive motion” in ¶82, as per “F is the frequency of a gesture element … that is repetitive” in ¶84) to the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) less than a predetermined value (as per detection of “repetitive motion” in ¶82, 84) among a plurality of pre-stored motions (as per “motion template database 620”) in a memory (330) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95);
obtaining, by the robot (100), a first motion (as per motion mapped to gesture identifier 352 for first sentence) different from the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) pre-allocated to the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) among the identified one or more motions (as per motion mapped to gesture identifier 352 for first sentence) as a motion corresponding to the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B, 6, 7A-B; ¶43-66, 74-95); and
performing, by the robot (100), the first motion (as per motion mapped to gesture identifier 352 for first sentence) while uttering (via speaker 266) the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) (Figs. 2, 3A-B; ¶43-66).
As per Claim 19, Ng-Thow-Hing further discloses wherein the first motion (as per motion mapped to gesture identifier 352 for first sentence) is based (as per “preparation motion from a prior gesture motion or a starting pose to an initial position for the current gesture motion” in ¶92) upon the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62).
As per Claim 20, Ng-Thow-Hing further discloses wherein the obtaining of the first motion (as per motion mapped to gesture identifier 352 for first sentence) different from the second motion (as per “if the speech text 336 indicates ‘Hello’ or ‘Bye,’, the grammar module (e.g., 340A) generates and outputs a gesture identifier (e.g., 342A) that an emblem is applicable or active and that the corresponding gesture should be a gesture of waving hands to a target person” in ¶62) pre-allocated to the second sentence (as per second sentence of original speech text 336 to be analyzed by grammar modules 340A-N) among the identified one or more motions (as per motion mapped to gesture identifier 352 for first sentence) among the plurality of motions pre-stored (as per “motion template database 620”) in the memory (330) is based on a time (as per timing information 344) to utter (via speaker 266) the first sentence (as per first sentence of original speech text 336 to be analyzed by grammar modules 340A-N) being greater than a predetermined time (as per “The motion planner 610 may modify the gesture as defined by the gesture template based on the timing information 344 to ensure that the gesture takes place in synchrony with the speech. For this purpose, the motion planner 610 may use the voice synthesizer output 262 to determine if the estimated timing of speech and the actual timing of the speech generated by the voice synthesizer 260 match. If the timing does not match, the motion planner 610 delays or advances the motions according to the gesture descriptor 612” in ¶79).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Ng-Thow-Hing (US Pub. No. 2012/0191460) in view of Fujita (US Pub. No. 2006/0195598).
As per Claim 17, Ng-Thow-Hing discloses all limitations of Claim 16. Ng-Thow-Hing does not expressly disclose obtaining a predetermined sentence when a trigger event is detected, wherein the trigger event includes an event in which the user is recognized within a predetermined distance from the robot.
Fujita discloses a robot (2) that interacts with a user (1) (Fig. 1; ¶56). The robot (2) communicates with a server (100) that supervises the contents provided to the user (1) via the robot (2) and provides the robot (2) with data and/or programs (Fig. 1; ¶57-58). The robot (2) includes a camera (15) for recognizing stimuli and acquiring data for responding to requests from the user (1) (Fig. 2; ¶63-64, 75, 186). In one embodiment, the server (100) provides the robot (2) with a message action module that imparts a message in accordance with a proper sequence when the user (1) is at a distance from the robot (2) with which dialog is possible (Figs. 13-14; ¶190, 194). In this way, a message (as per message A or message B) appropriate to the user’s response is generated (¶180, 194). Like Ng-Thow-Hing, Fujita is concerned with robot control systems.
Therefore, from these teachings of Ng-Thow-Hing and Fujita, one of ordinary skill in the art before the effective filing date would have found it obvious to apply the teachings of Fujita to the system of Ng-Thow-Hing since doing so would enhance the system by generating a message appropriate to the user’s response. Applying the teachings of Fujita to the system of Ng-Thow-Hing would result in a system that operates by “obtaining a predetermined sentence when a trigger event is detected, wherein the trigger event includes an event in which the user is recognized within a predetermined distance from the robot” in that the system of Ng-Thow-Hing would provide a proper sequence in response to detecting a user within a specified distance, as per Fujita.
Response to Arguments
Applicant's arguments filed 30 December 2025 have been fully considered but they are not persuasive, as discussed below.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thow-Hing fails to disclose or render obvious ‘identify one or more motions having a similarity to the second motion less than a predetermined value among a plurality of pre-stored motions in the memory, [and] obtain a first motion different from the second motion pre-allocated to the second sentence the identified one or more motions as a motion corresponding to the first sentence’” (page 10 of Amendment). However, as set forth above, Ng-Thow-Hing discloses all limitations in the claim language at issue. As such, Applicant’s argument involves an improper interpretation of the claim language and/or an improper interpretation of the cited reference. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thaw-Hing paragraph [0062] merely discloses a gesture identification operation, and does not disclose a configuration for determining similarity as in the present claims” (page 13 of Amendment). However, no rejection involves an assertion that Ng-Thow-Hing at paragraph [0062] discloses all limitations in the claim language at issue. As such, Applicant’s argument is not relevant to the rejection of any claim. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thaw-Hing paragraph [0082] merely mentions blending for preventing repetition, and does not disclose a configuration for determining similarity as in the present claims” (page 13 of Amendment). However, no rejection involves an assertion that Ng-Thow-Hing at paragraph [0082] discloses all limitations in the claim language at issue. As such, Applicant’s argument is not relevant to the rejection of any claim. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thaw-Hing paragraph [0083] merely mentions trajectory modification and/or speed modification for preventing repetition, and does not disclose a configuration for identifying, among previously stored motions, a motion having a low degree of similarity as in the present claims” (page 13 of Amendment). However, no rejection involves an assertion that Ng-Thow-Hing at paragraph [0083] discloses all limitations in the claim language at issue. As such, Applicant’s argument is not relevant to the rejection of any claim. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thaw-Hing paragraph [0083] is closer to partially modifying an existing motion, and is therefore considered to be different from the present claims” (page 13 of Amendment). However, no rejection involves an assertion that Ng-Thow-Hing at paragraph [0083] discloses all limitations in the claim language at issue. As such, Applicant’s argument is not relevant to the rejection of any claim. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 102 should not be maintained because “Ng-Thaw-Hing paragraph [0084] merely defines style parameters in detail, and does not disclose a configuration for determining similarity between motions as in the present claims” (page 13 of Amendment). However, no rejection involves an assertion that Ng-Thow-Hing at paragraph [0084] discloses all limitations in the claim language at issue. As such, Applicant’s argument is not relevant to the rejection of any claim. Therefore, Applicant’s argument does not identify a proper basis for finding that any rejection is improper.
Applicant argues that rejections under 35 USC 103 should not be maintained because “Fujita adds nothing to cure the deficiencies of Ng-Thaw-Hing as applied against independent claim 8” (page 14 of Amendment). However, as discussed above, the alleged deficiencies are not present in any rejection. Accordingly, Applicant’s argument is moot.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sumida (US Pub. No. 2009/0149991), Meier (US Pub. No. 2015/0217449), Lee (US Pub. No. 2018/0178372), Breazeal (US Pub. No. 2018/0229372), and Gewickey (US Pub. No. 2019/0366557) disclose robot control systems.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN HOLWERDA whose telephone number is (571)270-5747. The examiner can normally be reached M-F 8am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KHOI TRAN can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN HOLWERDA/Primary Examiner, Art Unit 3656