Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.
Response to Amendment
This communication is in response to the amendment filed on 01/22/2026.
Claims 1, 11-12, and 19 are currently amended. Claims 1-20 are pending.
Response to Arguments
The Examiner would like to point out that the claims as filed on 01/22/2026 make the claim set (particularly the independent claims) broader rather than narrower, contrary to what was discussed in the interview. The claims as filed also fail to clearly recite that the second information is obtained specifically without using markers; as discussed in the interview, simply deleting the mention of markers in the second-information limitation fails to claim markerless motion capture. Therefore, the claims as they stand specify only that the second information is obtained from the video, which it is in ZOHAR, and the claims as broadened could be rejected over ZOHAR in view of ADACHI. However, in order to advance prosecution more quickly, and because the interview conducted 12/18/2025 made clear that Applicant intended to claim obtaining the second joint information using markerless methods based only on the provided video, the Examiner has decided that the claims will be examined as if they were written to mean as such.
Applicant’s arguments filed on 01/22/2026, on pages 11-13 under REMARKS, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of US 2004/0119716 A1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-5, 7-9, 12-15, and 17-20 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2008/0221487 A1 to ZOHAR et al. (hereinafter “ZOHAR”) in view of US 10,510,159 B2 to ADACHI et al. (hereinafter “ADACHI”), and further in view of US 2004/0119716 A1 to PARK et al. (hereinafter “PARK”).
As per claim 1, ZOHAR discloses an electronic device comprising: memory storing instructions (a computing system comprising a memory to store instructions, images, and various programs relating to tracking a subject’s motion path; abstract; fig 3; paragraph [0053]); and at least one processor (the system comprising a processor component; abstract; fig 3; paragraph [0053]); wherein when the instructions are executed, the at least one processor is configured to (the processor component is adapted to execute the instructions stored in said memory; abstract; fig 3; paragraph [0053]): identify a video capturing a body and one or more markers attached to the body (causing the system to capture image/video and obtain 3D position information of a person wearing 3D position sensor markers 20, wherein the system finds the 3D coordinate position of 300 optical markers attached to said person; figs 1A-1D and 2; paragraphs [0053-0054]); based on the one or more markers being identified, obtain first information including an angle and a position of at least one first joint corresponding to the one or more markers among joints of the body (the system is adapted to obtain information including that of joint position in relation to the markers, wherein the markers were positioned such that kinematics skeleton 24 shows the joint positions of the person, and the system further estimates the joint angles of all joints using kinematics solver module 6; paragraphs [0050], [0064]); and obtain third information indicating a posture of the body based on interconnection of the at least one first joint and the at least one second joint in the video, based on the first information and the second information (kinematic solver 6, using the marker template and a kinematic skeleton, is adapted to output real-time poses of the kinematic skeleton based on the position/movement of the various sensors around the various joints of the body, wherein markers are stated to be disposed about every joint of the body in order to provide the full-body pose estimation seen in figures 1A-E, and the system is adapted to provide visualizations of muscle forces for any given exercise in real time; figs 1-3 and 5; paragraphs [0055-0056], [0064-0068]). ZOHAR fails to disclose the one or more markers including a two-dimensional pattern; based on pattern recognition for the two-dimensional pattern performed using the video, identify the one or more markers in the video; or obtain second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured.
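ZOHAR’s kinematic solver derives joint angles from the 3D marker coordinates. The core geometric step can be sketched as follows (an illustrative sketch only, not ZOHAR’s actual pipeline; the function name and three-marker layout are assumptions):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by markers on the proximal
    and distal segments -- e.g., hip, knee, and ankle markers would
    yield the knee flexion angle."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A fully extended limb: hip, knee, ankle collinear
print(joint_angle([0, 2, 0], [0, 1, 0], [0, 0, 0]))  # -> 180.0
```

Running this per frame over the marker stream gives the per-joint angle traces that a solver such as ZOHAR’s module 6 reports.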
ADACHI discloses the one or more markers including a two-dimensional pattern (each marker has an associated marker ID number and an associated marker ID pattern, as seen in the table of figure 3; fig 3; column 5, lines 1-20); and, based on pattern recognition for the two-dimensional pattern performed using the video, identify the one or more markers in the video (each marker ID is recognized by transforming the image coordinates of the corner points into an upright rectangular image by homography transformation, scanning the interior of the image, and performing association with an ID by known pattern matching; fig 3; column 5, lines 1-20).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify ZOHAR to use 2D pattern recognition to recognize markers having patterns placed on them, differentiating different marker groups related to different joints of motion of the subject, as taught by the ADACHI reference. The suggestion/motivation for doing so would have been to provide the ability to identify joints based on the patterns and IDs associated with the markers; for example, square markers could represent markers around the knee joint and circle markers could represent motion of the elbow joint, as suggested by ADACHI at column 5, lines 5-20. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ADACHI with ZOHAR to obtain the invention as specified in claim 1.
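The marker-identification step described in ADACHI (rectify the quadrilateral marker image by a homography, scan its interior, and match the bit pattern against known IDs) can be sketched as follows. This is a hedged illustration: `homography`, `sample_bits`, and the ID table are hypothetical stand-ins, not ADACHI’s implementation.

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping four src
    points onto four dst points (defined up to scale)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of the 8x9 system
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def identify_marker(corners, sample_bits, id_table):
    """Rectify the marker quadrilateral to an upright unit square,
    sample its interior bit pattern, and match against known IDs."""
    H = homography(corners, [(0, 0), (1, 0), (1, 1), (0, 1)])
    pattern = sample_bits(H)  # stands in for warping + thresholding the image
    for marker_id, ref in id_table.items():
        if np.array_equal(pattern, ref):
            return marker_id
    return None  # unrecognized marker
```

A production system would sample `pattern` from the warped pixel data; here `sample_bits` is left as a caller-supplied hook so the geometry is the focus.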
PARK discloses obtain second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured (a variety of joint positions are acquired in a marker-free manner; the joint positions are acquired via a computing system comprising a camera which first acquires feature points 13 and uses feature points 13 to estimate the coordinates of the middle joints using the 3-dimensional motion restoration module 30, applying the 3-dimensional position information of the feature points 13 to inverse kinematics methods known in the art; title; abstract; fig 1; paragraphs [0012-0014], [0023-0025], [0034-0037], [0055-0061]; NOTE: the claim does not require marker-free joint location, but based on the interview conducted, the examiner knows what the applicant intended and has included this rejection to compact prosecution).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR to obtain second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured, as taught by the PARK reference. The suggestion/motivation for doing so would have been to provide the ability to find the first joint using a marker of ZOHAR, to use a marker with a two-dimensional pattern of ADACHI, and finally to estimate the remaining joints using the marker-free motion capture methods of PARK, in order to estimate second information of a plurality of joint locations, positions, and rotation properties without the use of a marker to create a motion model of the user, as suggested by paragraphs [0055-0060] of PARK. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine PARK with ZOHAR to obtain the invention as specified in claim 1.
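PARK’s marker-free approach recovers middle joints from detected feature points by inverse kinematics. A minimal planar analogue, assuming a two-link chain with known segment lengths (the function name and the closed-form two-link solution are illustrative assumptions, not PARK’s module 30):

```python
import numpy as np

def mid_joint_2d(root, end, l1, l2, bend=+1):
    """Estimate the middle joint (e.g., an elbow) of a two-link chain
    from feature points at the root (shoulder) and end (wrist), given
    the segment lengths l1 and l2. `bend` selects the elbow-up or
    elbow-down solution."""
    root, end = np.asarray(root, dtype=float), np.asarray(end, dtype=float)
    d = min(np.linalg.norm(end - root), l1 + l2)  # clamp unreachable targets
    # law of cosines: interior angle at the root between the chain axis and l1
    a = np.arccos(np.clip((l1**2 + d**2 - l2**2) / (2 * l1 * d), -1.0, 1.0))
    base = np.arctan2(end[1] - root[1], end[0] - root[0])
    return root + l1 * np.array([np.cos(base + bend * a),
                                 np.sin(base + bend * a)])

# shoulder at the origin, wrist at (2, 0), both segments length 1:
print(mid_joint_2d([0, 0], [2, 0], 1.0, 1.0))  # -> elbow at (1.0, 0.0)
```

A 3D feature-point pipeline would apply the same idea per chain, typically with an additional constraint (e.g., a swivel plane) to fix the extra degree of freedom.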
As per claim 2, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1. Modified ZOHAR further discloses wherein when the instructions are executed, the at least one processor is further configured to: identify a first region in the video including the one or more markers and a body part corresponding to the at least one first joint (as seen in fig 2, markers 20 are matched to kinematics skeleton 24, which acts as a marker template and interconnects the markers 20 into a kinematic skeleton comprising regions of association for each joint of the body; these associations are made by kinematics solver 6; for example, looking at figure 1B, the markers 20 near the left knee joint of image 1B would be associated with the left knee as the first joint and first region, wherein the 3D coordinates of each marker are recorded in relation to the left knee joint to create the kinematic skeleton representation; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064]), and a second region in the video distinguished by the body, based on a first model receiving the video; obtain the first information including the angle and the position of the at least one first joint, based on a plurality of coordinates indicating distinct points of the one or more markers in the first region (the system is adapted to obtain information including that of joint position in relation to the markers, wherein the markers were positioned such that kinematics skeleton 24 shows the joint positions of the person; the system further estimates the joint angles of all joints using kinematics solver module 6 and is adapted to use the solver to show the 3D data coordinates within the space of the 300 optical markers being worn by the subject, which would show the position and angle of many joints of the body including a single joint; paragraphs [0050], [0054], [0064]); and obtain the second information including the position of the at least one second joint, based on identifying the second region (therefore, looking at figure 1B again, it can be seen that markers 20 in a region of the body near a right knee joint would be associated with the right knee joint depicted in figure 1B, wherein the 3D coordinates of each marker are recorded in relation to the right knee joint to create the kinematic skeleton representation, which includes the joint angle for every joint of every body part and records the motion of that body part during activity/exercise; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0068]).
As per claim 3, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 2. Modified ZOHAR fails to disclose wherein the plurality of coordinates respectively correspond to each of the corners of a square marker of the one or more markers, and wherein when the instructions are executed, the at least one processor is further configured to: obtain the first information, based on a center of the corners being identified by the plurality of coordinates.
ADACHI discloses wherein the plurality of coordinates respectively correspond to each of the corners of a square marker of the one or more markers, and wherein when the instructions are executed, the at least one processor is further configured to: obtain the first information, based on a center of the corners being identified by the plurality of coordinates (as seen in figs 4-5, the corner positions of a square marker indicating a region of interest are detected, and at step S3009 marker selection unit 1101 calculates the distribution of the square markers and counts the number of marker center coordinate points included in each of the 500-mm square grids obtained by dividing the X-Z plane; figs 4-5; column 4, lines 9-49).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR to have the plurality of coordinates respectively correspond to each of the corners of a square marker of the one or more markers, as taught by the ADACHI reference. The suggestion/motivation for doing so would have been to provide the ability to identify the marker closest to the origin, as suggested by ADACHI at column 12, lines 5-20. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ADACHI with modified ZOHAR to obtain the invention as specified in claim 3.
As per claim 4, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 2. Modified ZOHAR further discloses wherein the first information includes a first coordinate for indicating the position of the at least one first joint, the first coordinate defined in the first region (as disclosed, the system is capable of converting and displaying the 3D data coordinates of up to 300 optical markers 20, which are associated with a respective joint of the body, by using kinematic solver module 6 of the motion capture system 1 to generate muscle and joint paths for all respective muscles and joints, wherein the first joint would be associated with the left knee as described above; paragraphs [0044], [0054-0055]), and wherein when the instructions are executed, the at least one processor is further configured to: transform the first coordinate to a second coordinate defined in the second region, based on obtaining the first information (the body is modeled to include each joint and, as described above, would include a second joint and second region relating to the associated markers about the right knee, wherein in real time motion capture system 32 and the custom computational pipeline 36 translate the capture data to muscle forces and joint torques to display to users; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0070]); and obtain the third information indicating the posture of the body, based on the second coordinate and the second information (finally obtaining and displaying a visualization of the muscle forces relating to the motion capture markers of any given exercise or pose in real time, based on the position of the joint and the marker disposed about said joint; figs 1-2; paragraphs [0029], [0049-0050], [0054], [0061-0064], [0068]).
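The claimed transformation of a coordinate defined in the first region into one defined in the second region is, at bottom, a change of reference frame. A minimal sketch (the function and frame parameters are illustrative assumptions, not the reference’s pipeline):

```python
import numpy as np

def to_region_frame(point, region_origin, region_axes):
    """Re-express `point` (given in the first region's frame) in a
    second region's frame, defined by its origin and its axis vectors
    (rows of `region_axes`, expressed in the first frame)."""
    R = np.asarray(region_axes, dtype=float)
    return R @ (np.asarray(point, dtype=float) - np.asarray(region_origin, dtype=float))

# a point at (3, 4), re-expressed in a frame whose origin is (1, 1)
# and whose axes coincide with the global axes
print(to_region_frame([3, 4], [1, 1], [[1, 0], [0, 1]]))  # -> [2. 3.]
```

With a rotated second region, `region_axes` becomes the rotation matrix of that region, and the same one-liner applies.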
As per claim 5, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 2. Modified ZOHAR further discloses wherein when the instructions are executed, the at least one processor is further configured to: obtain the angle of the at least one first joint based on a direction of the one or more markers being indicated by the plurality of coordinates (the data analysis pipeline illustrated in FIG. 2 takes the data stream from the motion capture system and calculates the joint angles for every body part; each joint calculated is drawn as a sphere in this drawing; fig 2; paragraphs [0005], [0032]).
As per claim 7, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1. Modified ZOHAR further discloses wherein when the instructions are executed, the at least one processor is further configured to: obtain the third information by combining the position of the at least one second joint, among a position of the at least one first joint being included in the second information and the position of the at least one second joint, and the position of the at least one first joint being included in the first information (the exercise motion/pose information, including the joint angles and position coordinates of every joint of the body, includes motion, force, and muscle/kinematic information about the respective joint, and all joints are included in the reconstructed kinematic skeleton, which includes a first and second joint; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0068]).
As per claim 8, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1. Modified ZOHAR further discloses wherein the memory pre-stores another video capturing each of the one or more markers before the video is captured (a video is used to generate a template 22, which is processed for an initial or balance position and acts as the input pre-stored reference video/positioning of the markers 20; fig 2; paragraph [0050]), and wherein when the instructions are executed, the at least one processor is further configured to: obtain the first information based on comparing each of the one or more markers being captured in the video and the one or more markers being captured in the other video and pre-stored in the memory (wherein the computing system is adapted to utilize a template matching algorithm to interpolate for missing or bad marker data and to match the skeleton generated from the markers to the template, wherein the template matching result is passed to the computational inverse kinematics skeleton 24, where the position data of the markers is plotted in real time to joint orientations in the computational skeleton 24 based on the pre-captured template; figs 1A-E, 2; paragraphs [0049-0050]).
As per claim 9, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1. Modified ZOHAR further discloses wherein the one or more markers are respectively attached to a body part corresponding to the position of the at least one first joint (as seen in figures 1-2, the markers 20 are disposed all around the subject’s body, including around both the right and left knee, acting as the first and second joints in this disclosure; however, it is again noted that all joints are included in the motion capture reconstruction; figs 1A-E, 2; paragraphs [0048-0050], [0054-0055], [0061]).
As per claim 12, ZOHAR discloses an operating method of an electronic device, comprising (a method of using a computing system comprising a memory to store instructions, images, and various programs relating to tracking a subject’s motion path; abstract; fig 3; paragraph [0053]): identifying a video capturing a body and one or more markers attached to the body (causing the system to capture image/video and obtain 3D position information of a person wearing 3D position sensor markers 20, wherein the system finds the 3D coordinate position of 300 optical markers attached to said person; figs 1A-1D and 2; paragraphs [0053-0054]); based on the one or more markers being identified, obtaining first information including an angle and a position of at least one first joint corresponding to the one or more markers among joints of the body (the system is adapted to obtain information including that of joint position in relation to the markers, wherein the markers were positioned such that kinematics skeleton 24 shows the joint positions of the person, and the system further estimates the joint angles of all joints using kinematics solver module 6; paragraphs [0050], [0064]); and obtaining third information indicating a posture of the body based on the interconnection of the at least one first joint and the at least one second joint in the video, based on the first information and the second information (kinematic solver 6, using the marker template and a kinematic skeleton, is adapted to output real-time poses of the kinematic skeleton based on the position/movement of the various sensors around the various joints of the body, wherein markers are stated to be disposed about every joint of the body in order to provide the full-body pose estimation seen in figures 1A-E, and the system is adapted to provide visualizations of muscle forces for any given exercise in real time; figs 1-3 and 5; paragraphs [0055-0056], [0064-0068]).
ZOHAR fails to disclose the one or more markers including a two-dimensional pattern; based on pattern recognition for the two-dimensional pattern performed using the video, identifying the one or more markers in the video; or obtaining second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured.
ADACHI discloses the one or more markers including a two-dimensional pattern (each marker has an associated marker ID number and an associated marker ID pattern, as seen in the table of figure 3; fig 3; column 5, lines 1-20); and, based on pattern recognition for the two-dimensional pattern performed using the video, identifying the one or more markers in the video (each marker ID is recognized by transforming the image coordinates of the corner points into an upright rectangular image by homography transformation, scanning the interior of the image, and performing association with an ID by known pattern matching; fig 3; column 5, lines 1-20).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify ZOHAR to use 2D pattern recognition to recognize markers having patterns placed on them, differentiating different marker groups related to different joints of motion of the subject, as taught by the ADACHI reference. The suggestion/motivation for doing so would have been to provide the ability to identify joints based on the patterns and IDs associated with the markers; for example, square markers could represent markers around the knee joint and circle markers could represent motion of the elbow joint, as suggested by ADACHI at column 5, lines 5-20. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ADACHI with ZOHAR to obtain the invention as specified in claim 12.
PARK discloses obtaining second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured (a variety of joint positions are acquired in a marker-free manner; the joint positions are acquired via a computing system comprising a camera which first acquires feature points 13 and uses feature points 13 to estimate the coordinates of the middle joints using the 3-dimensional motion restoration module 30, applying the 3-dimensional position information of the feature points 13 to inverse kinematics methods known in the art; title; abstract; fig 1; paragraphs [0012-0014], [0023-0025], [0034-0037], [0055-0061]; NOTE: the claim does not require marker-free joint location, but based on the interview conducted, the examiner knows what the applicant intended and has included this rejection to compact prosecution).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR to obtain second information including a position of at least one second joint among joints of the body, based on the video in which the body is captured, as taught by the PARK reference. The suggestion/motivation for doing so would have been to provide the ability to find the first joint using a marker of ZOHAR, to use a marker with a two-dimensional pattern of ADACHI, and finally to estimate the remaining joints using the marker-free motion capture methods of PARK, in order to estimate second information of a plurality of joint locations, positions, and rotation properties without the use of a marker to create a motion model of the user. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine PARK with ZOHAR to obtain the invention as specified in claim 12.
As per claim 13, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 12. Modified ZOHAR further discloses wherein the identifying the video includes identifying a first region in the video including the one or more markers and a body part corresponding to the at least one first joint (as seen in fig 2, markers 20 are matched to kinematics skeleton 24, which acts as a marker template and interconnects the markers 20 into a kinematic skeleton comprising regions of association for each joint of the body; these associations are made by kinematics solver 6; for example, looking at figure 1B, the markers 20 near the left knee joint of image 1B would be associated with the left knee as the first joint and first region, wherein the 3D coordinates of each marker are recorded in relation to the left knee joint to create the kinematic skeleton representation; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064]), and a second region in the video distinguished by the body, based on a first model receiving the video; wherein the obtaining the first information includes obtaining the first information including the angle and the position of the at least one first joint, based on a plurality of coordinates indicating distinct points of the one or more markers in the first region (the system is adapted to obtain information including that of joint position in relation to the markers, wherein the markers were positioned such that kinematics skeleton 24 shows the joint positions of the person; the system further estimates the joint angles of all joints using kinematics solver module 6 and is adapted to use the solver to show the 3D data coordinates within the space of the 300 optical markers being worn by the subject, which would show the position and angle of many joints of the body including a single joint; paragraphs [0050], [0054], [0064]); and wherein the obtaining the second information includes obtaining the second information including the position of the at least one second joint, based on identifying the second region (therefore, looking at figure 1B again, it can be seen that markers 20 in a region of the body near a right knee joint would be associated with the right knee joint depicted in figure 1B, wherein the 3D coordinates of each marker are recorded in relation to the right knee joint to create the kinematic skeleton representation, which includes the joint angles for every body part; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064]).
As per claim 14, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 13, wherein the first information includes a first coordinate for indicating the position of the at least one first joint, the first coordinate defined in the first region (as disclosed, the system is capable of converting and displaying the 3D data coordinates of up to 300 optical markers 20, which are associated with a respective joint of the body, by using kinematic solver module 6 of the motion capture system 1 to generate muscle and joint paths for all respective muscles and joints, wherein the first joint would be associated with the left knee as described above; paragraphs [0044], [0054-0055]), and wherein the obtaining the third information includes: transforming the first coordinate to a second coordinate defined in the second region, based on obtaining the first information (the body is modeled to include each joint and, as described above, would include a second joint and second region relating to the associated markers about the right knee, wherein in real time motion capture system 32 and the custom computational pipeline 36 translate the capture data to muscle forces and joint torques to display to users; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0070]), and obtaining the third information indicating the posture of the body, based on the second coordinate and the second information (finally obtaining and displaying a visualization of the muscle forces relating to the motion capture markers of any given exercise or pose in real time, based on the position of the joint and the marker disposed about said joint; figs 1-2; paragraphs [0029], [0049-0050], [0054], [0061-0064], [0068]).
As per claim 15, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 13. Modified ZOHAR further discloses wherein the obtaining the first information further includes obtaining the angle of the at least one first joint based on a direction of the one or more markers being indicated by the plurality of coordinates (the data analysis pipeline illustrated in FIG. 2 takes the data stream from the motion capture system and calculates the joint angles for every body part; each joint calculated is drawn as a sphere in figs 1B and 2; fig 2; paragraphs [0005], [0032]).
As per claim 17, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 12. Modified ZOHAR further discloses wherein the obtaining the third information includes obtaining the third information by combining the position of the at least one second joint, among a position of the at least one first joint being included in the second information and the position of the at least one second joint, and the position of the at least one first joint being included in the first information (the exercise motion/pose information, including the joint angles and position coordinates of every joint of the body, includes motion, force, and muscle/kinematic information about the respective joint, and all joints are included in the reconstructed kinematic skeleton, which includes a first and second joint; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0068]).
As per claim 18, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 12. Modified ZOHAR further discloses wherein the one or more markers are respectively attached to a body part corresponding to the position of the at least one first joint (as seen in figures 1-2, the markers 20 are disposed all around the subject’s body, including around both the right and left knee, acting as the first and second joints in this disclosure; however, it is again noted that all joints are included in the motion capture reconstruction; figs 1A-E, 2; paragraphs [0048-0050], [0053-0055], [0061]).
As per claim 19, ZOHAR discloses a computer readable storage medium storing one or more programs (a computing system comprising a memory to store instructions, images and various programs relating to tracking a subject's motion path; abstract; fig 3; paragraph [0053]), when executed by at least one processor of an electronic device (the system comprises a processor component, and the processor component is adapted to execute the instructions stored in said memory; abstract; fig 3; paragraph [0053]), the one or more programs cause the electronic device to: based on the marker being identified, obtain a first region capturing the marker from the video (causing the system to capture image/video and obtain 3D position information of a person wearing 3D position sensor markers 20, wherein the system finds the 3D coordinate position of 300 optical markers attached to said person and disposed about every body part and every joint of the body, wherein each dot in figure 1B represents a different joint/body part region of association, and the plurality of regions includes a first region which can be selected to be any available region surrounding a joint represented by markers; figs 1A-E, 2; paragraphs [0048-0050], [0053-0055], [0061]); obtain a second region capturing the body and at least partially overlapped to the first region (causing the system to capture image/video and obtain 3D position information of a person wearing 3D position sensor markers 20, wherein the system finds the 3D coordinate position of 300 optical markers attached to said person and disposed about every body part and every joint of the body, wherein each dot in figure 1B represents a different joint/body part region of association; figs 1A-E, 2; paragraphs [0048-0050], [0053-0055], [0061]), from the video, identify a position of a designated body part among a plurality of body parts of body, based on the marker of the first region (the system is adapted to obtain information including that of joint position in 
relation to the markers, wherein markers were positioned such that kinematics skeleton 24 shows joint positions of the person, and further the system estimates the joint angles of all joints using kinematics solver module 6 and is adapted to use the solver to show 3D data coordinates within the space of the 300 optical markers being worn by the subject, which would show the position and angle of many joints of the body including a single joint, and the joint torques and force vectors may be identified down to a particular joint (designated body part); paragraphs [0050], [0054], [0058], [0064]); identify a position of the plurality of body parts included in the body in the second region, and from the second region (based on the 300 markers included in the video of the person 30, wherein markers are placed using marker templates and marker templates define markers to be placed at every joint of the body, which includes at least a second joint as seen in figs 1A-E wherein each joint is denoted with a ball/circle; figs 1A-1E, and 2; paragraphs [0049-0050], [0061], [0064]); obtain information indicating a posture of the body, based on the position of the plurality of body parts being identified in the second region and the position of the designated body part being identified based on the marker (kinematic solver 6, using the marker template and a kinematic skeleton, is adapted to output real time poses of the kinematic skeleton based on the position/movement of the various sensors around the various joints of the body, wherein markers are stated to be disposed about every joint of the body in order to provide a full body pose estimation seen in figures 1A-E, and the system is adapted to provide visualizations of muscle forces for any given exercise in real-time; figs 1-3, and 5; paragraphs [0055-0056], [0064-0068]).
ZOHAR fails to disclose identify a video capturing a body and a marker attached to the body, the marker including a two dimensional pattern; based on pattern recognition for the two dimensional pattern performed using the video, identify the marker in the video; from the second region based on identifying a position of at least one joint among a plurality of joints of the body, wherein the position of the at least one joint is identified based on the second region captured in the video.
ADACHI discloses identify a video capturing a body and a marker attached to the body, the marker including a two dimensional pattern (each marker has an associated marker ID number and an associated marker ID pattern as seen in figure 3’s table; fig 3; column 5, lines 1-20); based on pattern recognition for the two dimensional pattern performed using the video, identify the marker in the video (each marker has an associated marker ID number and an associated marker ID pattern as seen in figure 3’s table; fig 3; column 5, lines 1-20).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify ZOHAR to use the 2D pattern recognition of the ADACHI reference to recognize markers having patterns placed on them, in order to differentiate marker groups related to different joints of motion of the subject. The suggestion/motivation for doing so would have been to provide the ability to identify joints based on the patterns and IDs associated with the markers; for example, square markers would represent markers around the knee joint, or circle markers would represent motion of the elbow joint, as suggested by ADACHI at column 5, lines 5-20. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ADACHI with ZOHAR to obtain the invention as specified in claim 19.
PARK discloses from the second region based on identifying a position of at least one joint among a plurality of joints of the body (from the markerless motion capture system, the motion model of positions of a plurality of joints is tracked as seen in figures 1 and 3 using feature points which are used to determine joint position; figs 1-3; paragraphs [0012-0014], [0023-0025], [0034-0037], [0051], [0055-0061]; NOTE: the claim does not require marker-free joint location, but based on the interview conducted the examiner knows what the applicant intended and has included this rejection in the interest of compact prosecution), wherein the position of the at least one joint is identified based on the second region captured in the video (the position of the joints is based on feature points 13, which are captured within feature point area 23 acting as a region, and joints are identified based on feature points found in the feature point area; figs 1-3; paragraphs [0012-0014], [0023-0025], [0034-0037], [0051], [0055-0061]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify ZOHAR such that the position of the at least one joint is identified based on the second region captured in the video, as taught by the PARK reference. The suggestion/motivation for doing so would have been to provide the ability to find the first joint using a marker of ZOHAR, to use a marker with a two-dimensional pattern of ADACHI, and finally to estimate the remaining joints using the marker-free motion capture methods of PARK, in order to estimate second information of a plurality of joint locations, positions, and rotation properties without the use of a marker to create a motion model of the user. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine PARK with ZOHAR to obtain the invention as specified in claim 19.
As per claim 20, ZOHAR in view of ADACHI in view of PARK discloses the computer readable storage medium of claim 19. Modified ZOHAR further discloses wherein the marker is a first marker, wherein the designated body part is a first designated body part, and wherein when executed by the at least one processor, the one or more programs further cause the electronic device to: obtain a third region capturing a second marker distinct from the first marker from the video (a third region associated with a different joint of the body, wherein every joint of the imaged body is reconstructed kinematically using a second of the three hundred motion capture markers 20 disposed about the body of the subject, wherein the second marker selected is different from the first selected and is in a region associated with a third body part/joint; paragraphs [0048-0050], [0054-0056], [0064], [0068]); and identify a position of a second designated body part corresponding to the first designated body part, based on the second marker included in the third region (therefore, looking at figure 1B again, it can be seen that markers 20 in a region of the body near a right knee joint would be associated with the right knee joint depicted in figure 1B, wherein the 3D coordinates of each marker are recorded in relation to the right knee joint to create the kinematic skeleton representation, which includes the joint angle for every joint of every body part and records the motion of that body part during activity/exercise, which would move the marker through a plurality of regions including a third region; figs 1-2; paragraphs [0049-0050], [0054], [0061-0064], [0068]).
Claims 6 and 16 are rejected under 35 U.S.C. § 103 as being obvious over US 2008/0221487 A1 to ZOHAR et al. (hereinafter “ZOHAR”) in view of US 10,510,159 B2 to ADACHI et al. (hereinafter “ADACHI”) in further view of US 2004/0119716 A1 to PARK et al. (hereinafter “PARK”) in further view of US 2021/0358197 A1 to SHYSHEYA et al. (hereinafter “SHYSHEYA”).
As per claim 6, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 2. Modified ZOHAR fails to disclose wherein when the instructions are executed, the at least one processor is further configured to: through a second model receiving the second region, obtain the second information indicating a possibility that at least one of the at least one first joint and the at least one second joint exists in a virtual two-dimensional space.
SHYSHEYA discloses wherein when the instructions are executed, the at least one processor is further configured to: through a second model receiving the second region, obtain the second information indicating a possibility that at least one of the at least first joint and that at least one second joint exists in a virtual two-dimensional space (the 3D marker position data is obtained and input into a second texture mapping model in order to map the first and second joints of the plurality of joints into a two dimensional space as depicted in figures 1-2; abstract; figs 1-4; paragraphs [0040-0042]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR such that at least one of the at least one first joint and the at least one second joint exists in a virtual two-dimensional space, as taught by the SHYSHEYA reference. The suggestion/motivation for doing so would have been to provide the ability to synthesize free viewpoint full-body videos of humans as suggested by SHYSHEYA at paragraph [0039]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine SHYSHEYA with modified ZOHAR to obtain the invention as specified in claim 6.
As per claim 16, ZOHAR in view of ADACHI in view of PARK discloses the method of claim 13. Modified ZOHAR fails to disclose wherein the obtaining the second information includes obtaining the second information indicating a possibility that at least one of the at least one first joint and the at least one second joint exists in a virtual two-dimensional space, through a second model receiving the second region.
SHYSHEYA discloses wherein the obtaining the second information includes obtaining the second information indicating a possibility that at least one of the at least one first joint and the at least one second joint exists in a virtual two-dimensional space, through a second model receiving the second region (the 3D marker position data is obtained and input into a second texture mapping model in order to map the first and second joints of the plurality of joints into a two dimensional space as depicted in figures 1-2; abstract; figs 1-4; paragraphs [0040-0042]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR such that the obtaining the second information includes obtaining the second information indicating a possibility that at least one of the at least one first joint and the at least one second joint exists in a virtual two-dimensional space, as taught by the SHYSHEYA reference. The suggestion/motivation for doing so would have been to provide the ability to synthesize free viewpoint full-body videos of humans as suggested by SHYSHEYA at paragraph [0039]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine SHYSHEYA with modified ZOHAR to obtain the invention as specified in claim 16.
Claims 10-11 are rejected under 35 U.S.C. § 103 as being obvious over US 2008/0221487 A1 to ZOHAR et al. (hereinafter “ZOHAR”) in view of US 10,510,159 B2 to ADACHI et al. (hereinafter “ADACHI”) in further view of US 2004/0119716 A1 to PARK et al. (hereinafter “PARK”) in further view of US 2016/0317079 A1 to NEWMAN et al. (hereinafter “NEWMAN”).
As per claim 10, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1, wherein the one or more markers include: a first marker attached to one surface of a body part corresponding to the at least one first joint (the plurality of markers 20 are disposed around the subject’s body and each joint of the body, and each sphere in the model depicted in fig 1B represents a joint of the body; figs 1A-1B; paragraphs [0032], [0048-0050]). Modified ZOHAR fails to disclose a second marker attached to the other surface of the body part facing the one surface and corresponding to the at least one first joint, wherein the second marker has a shape different from the first marker.
NEWMAN discloses a second marker attached to the other surface of the body part facing the one surface and corresponding to the at least one first joint, wherein the second marker has a shape different from the first marker (the garment to track motion is designed to be worn over a plurality of joints and body part types and may be constructed to fit any sized body part desired; further, the markers disposed on the garment for motion capture are provided as four millimeter (4 mm) and six millimeter (6 mm) infrared reflecting, spherical markers, which are two markers differing in size/shape, and it is further stated that any shape may be used, so a rectangle may be substituted for the described spheres; paragraphs [0017-0018], [0051]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR such that the second marker has a shape different from the first marker, as taught by the NEWMAN reference. The suggestion/motivation for doing so would have been to allow the use of any size/shape design or configuration of sensor markers as suggested by NEWMAN at paragraph [0051]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine NEWMAN with modified ZOHAR to obtain the invention as specified in claim 10.
As per claim 11, ZOHAR in view of ADACHI in view of PARK discloses the electronic device of claim 1. Modified ZOHAR fails to disclose wherein the one or more markers include a plurality of markers disposed on different body parts of the body, and wherein a portion of the plurality of markers has an identical shape to one another.
NEWMAN discloses wherein the one or more markers include a plurality of markers disposed on different body parts of the body, and wherein a portion of the plurality of markers has an identical shape to one another (the garment to track motion is designed to be worn over a plurality of joints and body part types and may be constructed to fit any sized body part desired; further, the markers disposed on the garment for motion capture are provided as four millimeter (4 mm) and six millimeter (6 mm) infrared reflecting, spherical markers, and it is further stated that any shape may be used, including matching or different shapes; paragraph [0051]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify ZOHAR such that a portion of the plurality of markers has an identical shape to one another, as taught by the NEWMAN reference. The suggestion/motivation for doing so would have been to allow the use of any size/shape design or configuration of sensor markers as suggested by NEWMAN at paragraph [0051]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine NEWMAN with modified ZOHAR to obtain the invention as specified in claim 11.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677