Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 27, 2025 has been entered.
Response to Arguments
Applicant’s arguments and amendments have overcome the previous rejections under 35 U.S.C. 112(a) and 112(b). The remaining issues are addressed below.
Claim Objection
Applicant argues:
the specification clearly states that the all-in-one device includes an image capturing device 4 and a processing device 2
Examiner responds:
Applicant’s reliance on importing terminology from the specification has persuaded the examiner that the issue is more properly one of indefiniteness; a corresponding rejection under 35 U.S.C. 112(b) is set forth below.
Rejections under 35 U.S.C. 103
Applicant argues:
Clearly, the training architecture of Dhingra uses a mix of “Vision data” and “tactile data” for training, rather than generating corresponding latent features separately.
Examiner responds:
First, the claims do not require that the features be generated separately; the claims only require that the training occur (i.e., according to the claims, the training that generates the first latent feature and the second latent feature could be part of the same process). Second, the examiner is unclear on what the underlying technical distinction might be.
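As an illustration of this point only (the examiner's own sketch, not Applicant's or Dhingra's code, and not relied upon in the rejection; module names, dimensions, and data below are assumptions), the following Python/PyTorch fragment trains a pressure encoder and a vision encoder in a single process while still producing a distinct latent feature from each:

    import torch
    import torch.nn as nn

    # Illustrative stand-ins; all shapes and sizes are assumptions.
    pressure_encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
    vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32), nn.ReLU(), nn.Linear(32, 8))

    # One optimizer, one pass: both encoders are trained in the same process.
    optimizer = torch.optim.Adam(
        list(pressure_encoder.parameters()) + list(vision_encoder.parameters()), lr=1e-3)

    pressure_batch = torch.randn(4, 16)       # stand-in for pressure sensing training data
    image_batch = torch.randn(4, 1, 64, 64)   # stand-in for image training data

    optimizer.zero_grad()
    first_latent = pressure_encoder(pressure_batch)   # "first latent feature"
    second_latent = vision_encoder(image_batch)       # "second latent feature"

    # Any joint objective suffices for this illustration.
    loss = first_latent.pow(2).mean() + second_latent.pow(2).mean()
    loss.backward()
    optimizer.step()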
Applicant argues:
It is emphasized that the present invention has three neural networks with different functions, and the three neural networks are trained based on specific data flows and training processes with the specific latent features.
Examiner responds:
This arrangement appears to closely parallel Dhingra, Fig. 2.
Claim Objections
Claims 1 and 3 are objected to because of the following informalities:
Claims 1 and 3 recite “a plurality of … lens,” but the plural of “lens” is “lenses.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 11 recite “training a pressure analysis neural network based on pressure sensing training data to generate a first latent feature,” but this is unlimited functional claiming. MPEP 2173.05(g).
Claims 1 and 11 recite “training a vision analysis neural network based on image training data to generate a second latent feature,” but this is unlimited functional claiming. MPEP 2173.05(g).
Claims 1 and 11 recite “training a fusion analysis neural network based on the first latent feature and the second latent feature,” but this is unlimited functional claiming. MPEP 2173.05(g).
Claims 1 and 11 recite “estimating, by the fusion analysis neural network, a body posture tracking corresponding to the user based on the posture image and the pressure sensing values,” but this is unlimited functional claiming. MPEP 2173.05(g).
Dependent claims are likewise rejected.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 11 recite a “pressure analysis neural network,” a “vision analysis neural network,” and a “fusion analysis neural network,” but each of these is new terminology. MPEP 2173.05(a). The examiner notes that the claimed fusion analysis neural network does not analyze fusion; rather, it performs fusion.
Claims 1 and 11 recite a “latent” feature, but this is subjective terminology. MPEP 2173.05(b)(IV).
Claims 1 and 11 recite a “body posture tracking,” but there is a conflict because the word “tracking” is a verb, but is being used as a noun.
Claims 1 and 11 recite “corresponding,” but this is subjective terminology. MPEP 2173.05(b)(IV).
Claims 3 and 13 recite an “all-in-one” device, but this is new terminology. MPEP 2173.05(a). Note the remarks of August 27, 2025, arguing that this term should be interpreted by importing a limitation from the specification.
Dependent claims are likewise rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-7, 10-13, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. 20230316734 (“Dhingra”) in view of U.S. Pat. Pub. 20190175076 (“Lustig”).
1. (Currently amended) A posture correction system, comprising:
a plurality of depth camera lens, being configured to generate a posture image corresponding to a user; (Dhingra, claim 2. See also, Dhingra, [0026] “The first set of sensors 102 may receive a first set of data which may include depth data and/or RGB data in the form of RGB images and object-surface point cloud from the depth image of a camera, for example.”)
a plurality of pressure sensors being configured to detect a plurality of pressure sensing values; and (Dhingra, claim 3)
a processor, being connected to the plurality of depth camera lenses and the plurality of pressure sensors, and (Dhingra, claim 1, “a processor”)
being configured to perform operations comprising:
receiving the pressure sensing values from the plurality of pressure sensors, (Dhingra, claim 3, see also claim 1 “a second set of sensors receiving a second set of data”)
wherein each of the pressure sensing values corresponds to a body part of the user; (Dhingra, claim 1. The second set of data is used for pose estimation; pose estimation teaches the claimed body part, such as an elbow or knee, as per [0001]. See also, Fig. 2.)
training a pressure analysis neural network based on pressure sensing training data to generate a first latent feature; (Dhingra, [0041] “The graph-based neural network 204 may learn the complex geometric features of the object, fusing geometric information from the tactile sensor data”)
training a vision analysis neural network based on image training data to generate a second latent feature; (Dhingra, [0041] “Further, the graph-based neural network 204 may learn a way between these edges or how close or how far a tactile point may be to a visual point.”)
training a fusion analysis neural network based on the first latent feature and the second latent feature; (Dhingra, Fig. 2)
estimating, by the fusion analysis neural network, a body posture tracking corresponding to the user based on the posture image and the pressure sensing values; and (Dhingra, claim 1, “passing the set of geometric features and the first set of data through a pose fusion network to generate a first pose estimate associated with the first set of sensors and a second pose estimate associated with the second set of sensors”)
Dhingra is not relied on for the claim language below.
Lustig discloses generating a posture adjustment suggestion based on the body posture tracking (Lustig, abstract, “said processor is configured to instruct said haptic feedback device to provide haptic feedback to the person based on the estimated posture”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the pose fusion estimation of Dhingra with the posture improvement of Lustig in order to improve the posture of a user. See, e.g., Dhingra [0001], stating that this technology can be used for a human.
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
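For context in reading the mapping above, the three-network arrangement recited in claim 1 can be sketched as follows (examiner's illustration only, not Applicant's or Dhingra's implementation; the module names, dimensions, and joint count are assumptions): two networks produce latent features from pressure data and image data, and a fusion network estimates the body posture from their combination.

    import torch
    import torch.nn as nn

    class FusionPostureEstimator(nn.Module):
        # Illustrative names and sizes; not taken from the disclosure or the prior art.
        def __init__(self, pressure_dim=16, image_dim=64 * 64, latent_dim=8, num_joints=17):
            super().__init__()
            self.pressure_net = nn.Linear(pressure_dim, latent_dim)   # "pressure analysis" network
            self.vision_net = nn.Linear(image_dim, latent_dim)        # "vision analysis" network
            self.fusion_net = nn.Sequential(                          # "fusion analysis" network
                nn.Linear(2 * latent_dim, 32), nn.ReLU(), nn.Linear(32, num_joints * 3))

        def forward(self, pressure_values, posture_image):
            first_latent = self.pressure_net(pressure_values)
            second_latent = self.vision_net(posture_image.flatten(1))
            fused = torch.cat([first_latent, second_latent], dim=-1)
            return self.fusion_net(fused)   # estimated body posture (e.g., 3D joint positions)

    model = FusionPostureEstimator()
    estimated_posture = model(torch.randn(1, 16), torch.randn(1, 1, 64, 64))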
2. (Original) The posture correction system of claim 1, wherein the processor is further configured to perform following operations:
analyzing the posture image to generate a spatial position corresponding to each of the body parts of the user; and (Dhingra, Fig. 2)
estimating the body posture tracking corresponding to the user based on the spatial positions and the pressure sensing values. (Dhingra, Fig. 2)
3. (Original) The posture correction system of claim 1, wherein the plurality of depth camera lens and the processor are comprised in an all-in-one device, and the all-in-one device is connected to the pressure sensing device. (Dhingra, Fig. 1)
5. (Currently amended) The posture correction system of claim 1, wherein the processor is further configured to perform following operations:
collecting a plurality of first pressure sensing training data and a first label information corresponding to the first pressure sensing training data; and (Dhingra, [0031], “The graph-based neural network may, for example, calculate a number of nearest neighbors and a number of farthest neighbors of each point input to the graph-based neural network.” Dhingra’s choice of neighbors teaches the claimed labels.)
training the pressure analysis neural network based on the first pressure sensing training data and the first label information. (Dhingra, [0031], “Thereafter, the graph-based neural network may learn the edge weights.”)
6. (Original) The posture correction system of claim 5, wherein the processor is further configured to perform following operations:
collecting a plurality of first image training data and a second label information corresponding to the first image training data; and (Dhingra, [0031], “The graph-based neural network may, for example, calculate a number of nearest neighbors and a number of farthest neighbors of each point input to the graph-based neural network.” Dhingra’s choice of neighbors teaches the claimed labels. Note that [0031] states that this is done for both the image and pressure data.)
training the vision analysis neural network based on the first image training data and the second label information. (Dhingra, [0031], “Thereafter, the graph-based neural network may learn the edge weights.”)
7. (Original) The posture correction system of claim 6, wherein the processor is further configured to perform following operations:
collecting a plurality of first paired training data and a third label information corresponding to the first paired training data, (Dhingra, [0031], “The graph-based neural network may, for example, calculate a number of nearest neighbors and a number of farthest neighbors of each point input to the graph-based neural network.” Dhingra’s choice of neighbors teaches the claimed labels.)
wherein each of the first paired training data comprises a second pressure sensing training data and a second image training data; and (Dhingra, [0031] “the graph-based neural network may combine the depth image information from the first set of data and the object surface contact points from the second set of data”)
training the fusion analysis neural network and fine-tuning the pressure analysis neural network and the vision analysis neural network based on the first paired training data and the third label information. (Dhingra, [0031], “Thereafter, the graph-based neural network may learn the edge weights.” Dhingra, [0041], provides further detail on fine-tuning.)
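Regarding the “fine-tuning” language of this limitation, the following sketch (examiner's illustration only; the optimizer settings, learning rates, and shapes are assumptions, not taken from Dhingra or the present disclosure) shows one conventional way a fusion network can be trained on paired data while previously trained pressure and vision networks are fine-tuned in the same backward pass:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Minimal stand-ins; all names and shapes are illustrative assumptions.
    pressure_net = nn.Linear(16, 8)           # previously trained pressure analysis network
    vision_net = nn.Linear(64 * 64, 8)        # previously trained vision analysis network
    fusion_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 51))

    # One optimizer over all three networks: the fusion network is trained while
    # the two earlier networks are fine-tuned (here, at a smaller learning rate).
    optimizer = torch.optim.Adam([
        {"params": fusion_net.parameters(), "lr": 1e-3},
        {"params": pressure_net.parameters(), "lr": 1e-4},
        {"params": vision_net.parameters(), "lr": 1e-4},
    ])

    pressure = torch.randn(4, 16)         # stand-in for paired pressure training data
    image = torch.randn(4, 64 * 64)       # stand-in for paired image training data
    label = torch.randn(4, 51)            # stand-in for label information

    optimizer.zero_grad()
    fused = torch.cat([pressure_net(pressure), vision_net(image)], dim=-1)
    loss = F.mse_loss(fusion_net(fused), label)
    loss.backward()
    optimizer.step()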
Regarding claim 10, the combination of Dhingra and Lustig discloses comparing the body posture tracking with a standard posture to calculate a posture difference value (Lustig, ¶77, discloses at step 320 a supervised classifier that uses training data consisting of labeled posture points comprising sensor readings with correct labels; when the classifier estimates the posture based on the sensor readings, it compares the user’s posture to the labeled standard posture in the training data and assigns a severity measurement based on how much the posture deviates from the correct labels);
and generating the posture adjustment suggestion based on the posture difference value (Lustig, ¶80, discloses messages sent to the user that are suited to the estimated posture and that provide feedback on any issues with posture, or on health issues related to bad posture).
Claims 11-13, 15-17 and 20 are rejected for the same reasoning as the corresponding system claims.
Claims 8, 9, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. 20230316734 (“Dhingra”) in view of U.S. Pat. Pub. 20190175076 (“Lustig”) further in view of “Pytorch Gradient Consistency Loss” Brendan T. Crabb, GitHub, retrieved from https://github.com/btcrabb/GCLoss/tree/main, last updated June 28, 2021 according to https://github.com/btcrabb/GCLoss/commits/main/GCCriterion.py.
8. (Original) The posture correction system of claim 1, wherein the processor is further configured to perform following operations:
collecting a plurality of second paired training data, wherein each of the second paired training data comprises a third pressure sensing training data and a third image training data; (Dhingra, [0031], “the graph-based neural network may combine the depth image information from the first set of data and the object surface contact points from the second set of data”. Dhingra teaches a plurality of data; for the purposes of mapping, some of this data is arbitrarily deemed the “first paired training data” and the remainder the “second paired training data.”)
calculating a (Dhingra, [0031], “the graph-based neural network may combine the depth image information from the first set of data and the object surface contact points from the second set of data”)
training the fusion analysis neural network and fine-tuning the pressure analysis neural network and the vision analysis neural network based on the second paired training data and the functions. (Dhingra, [0031], “Thereafter, the graph-based neural network may learn the edge weights.” Dhingra, [0041], provides further detail on fine-tuning.)
Dhingra is not relied on for the function being a consistency loss function.
Crabb discloses the function being a consistency loss function (Crabb, “Pytorch Gradient Consistency Loss”).
It would have been obvious, before the effective filing date, to combine the posture system of Dhingra and Lustig with the consistency loss function of Crabb in order to compute a consistency loss on medical/ergonomic data. Based on the above findings, this is an example of “combining prior art elements according to known methods to yield predictable results” (e.g., the software of Dhingra could have been implemented in Python and called the function of Crabb). MPEP 2143.
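For clarity on what is meant by a consistency loss (examiner's illustration only; this is not the code of the Crabb repository, whose title indicates a gradient-based formulation, and the function name and tensors below are assumptions), a consistency loss generally penalizes disagreement between two predictions or representations of the same underlying quantity:

    import torch
    import torch.nn.functional as F

    def consistency_loss(pred_a: torch.Tensor, pred_b: torch.Tensor) -> torch.Tensor:
        # Penalize disagreement between two estimates of the same quantity
        # (e.g., a pose estimated from image data and a pose estimated from
        # pressure data). A simple MSE form is used purely for illustration;
        # it is not the gradient-consistency formulation of the cited repository.
        return F.mse_loss(pred_a, pred_b)

    pose_from_vision = torch.randn(4, 51)     # hypothetical pose estimate from image data
    pose_from_pressure = torch.randn(4, 51)   # hypothetical pose estimate from pressure data
    loss = consistency_loss(pose_from_vision, pose_from_pressure)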
Claim 9 is rejected with the same mapping as claim 8 (note that claim 9 recites “corresponding to,” meaning that the various neural networks and predicted postures are not required to be present).
Claims 18 and 19 are rejected for the same reasoning as the corresponding system claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US10867218B2, titled “Biometric sensor fusion to classify vehicle passenger state”; see also Fig. 8, element 801, “Occupant position.”
US12138042B2 – “extracting a body posture flow of the player from the training video by performing a computer vision algorithm on one or more frames of the training video” (claim 1).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/ Primary Examiner, Art Unit 2663