DETAILED ACTION
*Note in the following document:
1. Texts in italic bold format are limitations quoted either directly or conceptually from claims/descriptions disclosed in the instant application.
2. Texts in regular italic format are quoted directly from cited reference or Applicant’s arguments.
3. Texts with underlining are added by the Examiner for emphasis.
4. Acronym “PHOSITA” stands for “Person Having Ordinary Skill In The Art”.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities: paragraph [0032] recites “I this example, the frames representing the movement between …” (p. 14, second-to-last line). It appears the word “I” is intended to be “In”. Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 12,067,664 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are either anticipated by, or are obvious variations of, the claims of U.S. Patent No. 12,067,664 B2, as shown in the table below.
Instant Application:
U.S. Patent No. 12,067,664 B2:
Claim 1. A computer-implemented method for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, the method comprising:
receiving a pose data file representing, for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in the respective frame, a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise;
receiving the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a standard RGB camera device;
detecting, with a real-time two-dimensional skeleton detector, a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
selecting a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the standard RGB camera device.
Claim 7. The method of claim 1, wherein selecting a particular two-dimensional skeleton projection of the first human further comprises:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
Claim 1. A computer-implemented method for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, the method comprising:
receiving a pose data file representing, for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in a respective frame of the subset of frames of the reference frame sequence, a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise;
receiving the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a camera device;
detecting, with a real-time two-dimensional skeleton detector, a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
selecting a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the camera device,
wherein selecting the particular two-dimensional skeleton projection of the first human further includes:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
Claim 2. The method of claim 1, further comprising:
visualizing to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 2. The method of claim 1, further comprising:
visualizing to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 3. The method of claim 1, further comprising:
determining a mathematical distance of the current pose from the corresponding reference pose, wherein the distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, indicating to the second human pose correction feedback on how to correct the current pose.
Claim 3. The method of claim 1, further comprising:
determining a mathematical distance of the current pose from the corresponding reference pose, wherein the distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, indicating to the second human pose correction feedback on how to correct the current pose.
Claim 4. The method of claim 3, wherein the pose data file includes annotations for each frame grouping subsets of joints to corresponding body parts, the method further comprising:
indicating body parts which exceed a predefined critical distance for the current test frame; and
providing feedback with regard to the indicated body parts on how to change the current pose until the current distance for said body parts falls below the critical distance.
Claim 4. The method of claim 3, wherein the pose data file includes annotations for each frame grouping subsets of joints to corresponding body parts, the method further comprising:
indicating body parts which exceed a predefined critical distance for the current test frame; and
providing feedback with regard to the indicated body parts on how to change the current pose until the current distance for said body parts falls below the critical distance.
Claim 5. The method of claim 3, wherein the pose correction feedback is output to the second human as visual information or sound information.
Claim 5. The method of claim 3, wherein the pose correction feedback is output to the second human as visual information or sound information.
Claim 6. The method of claim 1, wherein selecting the particular two-dimensional skeleton projection further comprises:
identifying the corresponding reference pose image by selecting a subgroup of potential corresponding reference pose images.
Claim 6. The method of claim 1, wherein selecting the particular two-dimensional skeleton projection further comprises:
identifying the corresponding reference pose image by selecting a subgroup of potential corresponding reference pose images.
Claim 7. The method of claim 1, wherein selecting a particular two-dimensional skeleton projection of the first human further comprises:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
See Claim 1 above
Claim 8. A computer system for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, the system comprising:
an interface configured to receive a pose data file representing for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in the respective frame, a plurality of two-dimensional skeleton projections on a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection on the spherical surface for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise, and further configured to receive the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a standard RGB camera device;
a real-time two-dimensional skeleton detector module configured to detect a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
a pose matching module configured to select a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation of the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the standard RGB camera device.
Claim 14. The system of claim 8, wherein the pose matching module further comprises:
a neural network with an input layer (IL) to receive a representation of the two-dimensional skeleton as a test input, and an output layer (OL) to predict a viewing perspective associated with the received test input, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth, wherein the predicted viewing perspective represents a pointer to the particular two-dimensional skeleton projection of the first human to be selected.
Claim 7. A computer system for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, the system comprising:
an interface configured to receive a pose data file representing for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in a respective frame of the subset of frames of the reference frame sequence, a plurality of two-dimensional skeleton projections on a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection on the spherical surface for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise, and further configured to receive the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a camera device;
a real-time two-dimensional skeleton detector module configured to detect a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
a pose matching module configured to select a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation of the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the camera device,
wherein the pose matching module includes:
a neural network with an input layer (IL) to receive a representation of the two-dimensional skeleton as a test input, and an output layer (OL) to predict a viewing perspective associated with the received test input, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth, wherein the predicted viewing perspective represents a pointer to the particular two-dimensional skeleton projection of the first human to be selected.
Claim 9. The system of claim 8, further comprising:
a visualizer function configured to visualize to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 8. The system of claim 7, further comprising:
a visualizer function configured to visualize to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 10. The system of claim 8, further comprising a pose checking module configured to:
determine a mathematical distance of the current pose to the corresponding reference pose, wherein the mathematical distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, to indicate to the second human pose correction feedback on how to correct the current pose.
Claim 9. The system of claim 7, further comprising a pose checking module configured to:
determine a mathematical distance of the current pose to the corresponding reference pose, wherein the mathematical distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, to indicate to the second human pose correction feedback on how to correct the current pose.
Claim 11. The system of claim 8, wherein the pose data file includes annotations for each frame grouping subsets of joints to corresponding body parts, the pose checking module further configured to:
indicate body parts which exceed a predefined critical distance for the current test frame; and
provide feedback with regard to the indicated body parts on how to change the current pose until the current distance for said body parts falls below the critical distance.
Claim 10. The system of claim 7, wherein the pose data file includes annotations for each frame grouping subsets of joints to corresponding body parts, the pose matching module further configured to:
indicate body parts which exceed a predefined critical distance for the current test frame; and
provide feedback with regard to the indicated body parts on how to change the current pose until the current distance for said body parts falls below the critical distance.
Claim 12. The system of claim 8, wherein the pose matching module further comprises a normalizer function configured to transform the detected 2D skeleton into a normalized two-dimensional skeleton in that each bone length of the detected 2D skeleton is divided by the height of the two-dimensional skeleton.
Claim 11. The system of claim 7, wherein the pose matching module further comprises a normalizer function configured to transform the detected 2D skeleton into a normalized two-dimensional skeleton in that each bone length of the detected 2D skeleton is divided by the height of the two-dimensional skeleton.
Claim 13. The system of claim 8, wherein the virtual spherical surface is normalized with a radius equal to one, with each two-dimensional skeleton projection being a normalized two-dimensional representation of the three-dimensional skeleton from a different viewing angle.
Claim 12. The system of claim 7, wherein the virtual spherical surface is normalized with a radius equal to one, with each two-dimensional skeleton projection being a normalized two-dimensional representation of the three-dimensional skeleton from a different viewing angle.
Claim 14. The system of claim 8, wherein the pose matching module further comprises:
a neural network with an input layer (IL) to receive a representation of the two-dimensional skeleton as a test input, and an output layer (OL) to predict a viewing perspective associated with the received test input, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth, wherein the predicted viewing perspective represents a pointer to the particular two-dimensional skeleton projection of the first human to be selected.
See Claim 7 above
Claim 15. A computer program product for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, wherein the computer program product, when loaded into a memory of a computing device and executed by at least one processor of the computing device, causes the at least one processor to:
receive a pose data file representing for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in the respective frame, a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise;
receive the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a standard RGB camera device;
detect, with a real-time two-dimensional skeleton detector, a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
select a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the standard RGB camera device.
Claim 19. The computer program product of claim 15, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to select a particular two-dimensional skeleton projection of the first human by:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
Claim 13. A non-transitory computer program product for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human, wherein the computer program product, when loaded into a memory of a computing device and executed by at least one processor of the computing device, causes the at least one processor to:
receive a pose data file representing for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in a respective frame of the subset of frames of the reference frame sequence, a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise;
receive the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a camera device;
detect, with a real-time two-dimensional skeleton detector, a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame; and
select a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the camera device,
wherein selecting the particular two-dimensional skeleton projection of the first human includes:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
Claim 16. The computer program product of claim 15, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to:
visualize to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 14. The non-transitory computer program product of claim 13, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to:
visualize to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time.
Claim 17. The computer program product of claim 15, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to:
determine a mathematical distance of the current pose from the corresponding reference pose, wherein the distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, indicate to the second human pose correction feedback on how to correct the current pose.
Claim 15. The non-transitory computer program product of claim 13, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to:
determine a mathematical distance of the current pose from the corresponding reference pose, wherein the distance is a measure indicating whether the second human is correctly performing the physical exercise; and
in case the physical exercise is not correctly performed, indicate to the second human pose correction feedback on how to correct the current pose.
Claim 18. The computer program product of claim 15, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to select the two-dimensional skeleton projection including
identifying the corresponding reference pose image by selecting a subgroup of potential corresponding reference pose images.
Claim 16. The non-transitory computer program product of claim 13, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to select the two-dimensional skeleton projection including
identifying the corresponding reference pose image by selecting a subgroup of potential corresponding reference pose images.
Claim 19. The computer program product of claim 15, wherein the computer program product, when loaded into the memory of the computing device and executed by the at least one processor of the computing device, causes the at least one processor to select a particular two-dimensional skeleton projection of the first human by:
providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and
selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective.
See Claim 13 above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15-19 are rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention.
Claims 15-19 claim a computer program product. The specification defines the computer program product as A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 904, the storage device 906, or memory on processor 902 ([0063]) and In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 964, expansion memory 984, or memory on processor 952, that may be received, for example, over transceiver 968 or external interface 962 ([0070]). MPEP 2106.03 writes
Non-limiting examples of claims that are not directed to any of the statutory categories include:
• Products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitations;
• Transitory forms of signal transmission (often referred to as "signals per se"), such as a propagating electrical or electromagnetic signal or carrier wave;
…
Even when a product has a physical or tangible form, it may not fall within a statutory category. For instance, a transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine, is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects), and is not composed of matter such that it would qualify as a composition of matter. Nuijten, 500 F.3d at 1356-1357, 84 USPQ2d at 1501-03. As such, a transitory, propagating signal does not fall within any statutory category. Mentor Graphics Corp. v. EVE-USA, Inc., 851 F.3d 1275, 1294, 112 USPQ2d 1120, 1133 (Fed. Cir. 2017); Nuijten, 500 F.3d at 1356-1357, 84 USPQ2d at 1501-03.
Since the specification fails to limit the product to non-transitory media, the computer program product fails to fall within a statutory category.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a real-time two-dimensional skeleton detector module configured to detect …, a pose matching module configured to select … in Claim 8; a pose checking module configured to … in Claim 10; the pose checking module in Claim 11; the pose matching module in Claim 12 and the pose matching module in Claim 14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Independent Claim 1 recites the limitations receiving a pose data file representing, for at least a subset of frames of the reference frame sequence, based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of a three-dimensional skeleton of the first human in the respective frame in lines 1-4; the pose of the second human in the respective test frame in line 19; and the current pose of the second human in line 22. There is insufficient antecedent basis for these limitations in the claim.
Independent Claims 8 and 15 comprise the same phrases. Therefore, the same rationale is applied to Claims 8 and 15.
Other dependent claims are rejected due to their dependency on their respective independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 8-11, 15-17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 110796077 A) in view of Xia et al. (“View invariant human action recognition using histograms of 3D joints”, Computer Vision and Pattern Recognition workshops (CVPRW), 2012 IEEE Computer Society Conference on, IEEE, June 16, 2012, pp.20-27).
Regarding Claim 1, Li teaches or suggests a computer-implemented method ([0002]: This invention relates to the field of computers, and more specifically to a method for real-time detection and correction of posture and motion) for matching a test frame sequence with a reference frame sequence, the reference frame sequence demonstrating a physical exercise performed by a first human ([0008]: Step 1: Capture user motion video streams using cameras, then push the streams to the server, where standard motion video is stored. [0004]: Radio calisthenics is a well-known sport with many participants, and it is especially important for the exercise of Chinese teenagers during breaks), the method comprising:
receiving a pose data file representing, for at least a subset of frames of the reference frame sequence ([0009]: Step 2: Encode and slice the user's motion video and standard motion video respectively to generate frame files, and generate a corresponding index file), based on a list of three-dimensional joint coordinates representing positions of a plurality of joints of ([0010]: Step 3: Convert user motion videos and standard motion videos into skeleton data. [0017]: User motion videos and standard motion videos are cut into image sequences. Each image has a skeleton annotation, and a corresponding data file is generated. The skeleton annotation is the annotation information at each joint. [0044]: Then, the skeleton analysis algorithm will cut the video into a sequence of images. Each image has a skeleton annotation of 18 joints and will generate a corresponding JSON file. Each joint information includes three pieces of information (x, y, score). x and y are the coordinate information in the image, with a value range of (0, image.size), where image.size represents the image size, and score represents the predicted score, which has been normalized and has a value range of (0, 1). The closer the value is to 1, the more accurate the prediction, the higher the degree of joint reconstruction, and the higher the degree of pose reconstruction. Li teaches the skeleton is for pose reconstruction);
receiving the test frame sequence representing movements of a second human while imitating the physical exercise, the test frame sequence captured at a particular angle by a standard RGB camera device ([0017]: User motion videos and standard motion videos are cut into image sequences. Each image has a skeleton annotation, and a corresponding data file is generated. [0040]: First, Ffmpeg is used to capture user motion video streams via computer or mobile phone camera and upload them to the Ngnix streaming media server in real time. Li does not explicitly recite that the frame sequences are captured by a standard RGB camera device. However, Li explicitly discloses that the video can be captured by a computer or mobile phone camera. A skilled person would have known that a computer or a mobile phone camera includes a standard RGB camera sensor. Therefore it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to capture the test frame sequence at a particular angle by a standard RGB camera device in order to reduce the production cost);
detecting, with a real-time two-dimensional skeleton detector, a two-dimensional skeleton of the second human in a current test frame of the test frame sequence, wherein the two-dimensional skeleton of the second human is a two-dimensional representation of the pose of the second human in the respective test frame ([0039]: As shown in Figure 1, the present invention provides a method for real-time detection and correction of posture and motion. [0013]: … Calculate the similarity of the matched keyframes to obtain an evaluation score, and obtain a real-time posture evaluation to assist the user in adjusting their posture and completing posture and motion detection and correction. [0066]: To achieve a real-time effect, this embodiment performs a calculation every 3 seconds for approximately 90 frames of data. Users can view their similarity score in real time while moving); and
selecting a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file, the particular two-dimensional skeleton projection representing the corresponding reference pose at a viewing angle which corresponds to the particular angle of the standard RGB camera device ([0012]: Step 5: Perform keyframe matching on the features extracted from the two motion videos. [0013]: Step 6: Calculate the similarity of the matched keyframes to obtain an evaluation score, and obtain a real-time posture evaluation to assist the user in adjusting their posture and completing posture and motion detection and correction. [0064]: Finally, the average difference between the matching frames is calculated using the average distance algorithm, and then the actual score is obtained through the mapping function. The difference cited by Li is interpreted as a distance. Li indirectly teaches selecting a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose representation in the pose data file since the evaluation score is calculated based on the matched keyframes which share minimum distance).
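For illustration only (not part of the record), the selection step at issue can be sketched as choosing, among a set of candidate two-dimensional skeleton projections, the one with the minimum mean joint distance from the detected two-dimensional skeleton. The function names and the mean-Euclidean metric below are assumptions; the claims do not fix a particular distance measure:

```python
import math

def skeleton_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding 2D joints (assumed metric)."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

def match_pose(detected, candidate_projections):
    """Return the index of the candidate 2D skeleton projection with the
    minimum mathematical distance from the detected 2D skeleton."""
    return min(range(len(candidate_projections)),
               key=lambda i: skeleton_distance(detected, candidate_projections[i]))
```

Under this sketch, the matched candidate plays the role of the corresponding reference pose representation at the viewing angle of the camera.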
Li fails to disclose a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle.
In the same field of human action recognition, Xia teaches a plurality of two-dimensional skeleton projections onto a virtual spherical surface with a particular joint of the three-dimensional skeleton at the geometric center of the spherical surface, wherein each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle, with the two-dimensional reference pose image being a characteristic pose of the physical exercise (e.g. a representation of 3D human posture using a spherical coordinate system - page 21, left column, first full paragraph), where a posture is a set of 3D locations of skeletal joints (page 22, section 3, first paragraph, right column). By applying the teachings of Xia, the skilled person would address the above technical problem by providing two-dimensional skeleton projections onto a virtual spherical surface (e.g. mapping the 3D locations of the joints onto a reference spherical coordinate system) with a particular joint (e.g. the hip center (pelvis) according to Xia) of the three-dimensional skeleton at the geometric center of the spherical surface. Each two-dimensional skeleton projection for a particular frame of the subset corresponds to a two-dimensional reference pose image of the three-dimensional skeleton of the first human from a different viewing angle. Also, providing a number of references from different angles is considered obvious to one skilled in the art, as it simply repeats the steps of creating the first viewing angle. One of ordinary skill in the art would have had a reasonable expectation of success in combining two like systems where both systems use human motion recognition.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Li with the features of spherical projections as taught by Xia. The motivation would have been to achieve excellent recognition rates in a manner that is also view invariant and real time (page 22, right column, 2nd paragraph).
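As a minimal sketch of the spherical-coordinate representation attributed to Xia above (joints expressed relative to a root joint such as the hip center), the hypothetical helper below translates 3D joints to a chosen center and converts each to spherical coordinates; the axis conventions and function names are assumptions, not Xia's exact formulation:

```python
import math

def to_spherical(joints_3d, center):
    """Translate 3D joints so the chosen root joint (e.g. the pelvis, per the
    rationale above) sits at the origin, then express each joint in spherical
    coordinates (r, theta, phi)."""
    cx, cy, cz = center
    out = []
    for x, y, z in joints_3d:
        dx, dy, dz = x - cx, y - cy, z - cz
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        theta = math.atan2(dy, dx)              # azimuth angle
        phi = math.acos(dz / r) if r else 0.0   # inclination angle
        out.append((r, theta, phi))
    return out
```

Repeating this mapping for different viewing angles yields the plurality of two-dimensional reference projections discussed above.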
Regarding Claim 2, Li teaches or suggests visualizing to the second human the current test frame including a representation of the corresponding two-dimensional skeleton of the second human in near-real-time ([0008]-[0010]: Step 1: Capture user motion video streams using cameras, then push the streams to the server, where standard motion video is stored. Step 2: Encode and slice the user's motion video and standard motion video respectively to generate frame files, and generate a corresponding index file. The index file records the playback order of the corresponding motion video, i.e., the frame file, and is updated in real time. Step 3: Convert user motion videos and standard motion videos into skeleton data).
Regarding Claim 3, Li teaches or suggests determining a mathematical distance of the current pose from the corresponding reference pose, wherein the distance is a measure indicating whether the second human is correctly performing the physical exercise; and in case the physical exercise is not correctly performed, indicating to the second human pose correction feedback on how to correct the current pose ([0064]: Finally, the average difference between the matching frames is calculated using the average distance algorithm, and then the actual score is obtained through the mapping function. To achieve a real-time effect, this embodiment performs a calculation every 3 seconds for approximately 90 frames of data. Users can view their similarity score in real time while moving. After the movement is completed, the system compares all movement sequences again to obtain the final score, which helps users adjust their posture and complete posture and movement detection and correction).
Regarding Claim 4, Li teaches or suggests wherein the pose data file includes annotations for each frame grouping subsets of joints to corresponding body parts ([0017]: User motion videos and standard motion videos are cut into image sequences. Each image has a skeleton annotation, and a corresponding data file is generated. The skeleton annotation is the annotation information at each joint. Each joint information includes three pieces of information: (x, y, score).x and y are the coordinate information of each joint, with a value range of (0, image.size). score represents the predicted score, with a value range of (0, 1).image.size represents the image size), the method further comprising:
indicating body parts which exceed a predefined critical distance for the current test frame; and providing feedback with regard to the indicated body parts on how to change the current pose until the current distance for said body parts falls below the critical distance ([0013]-[0014]: Calculate the similarity of the matched keyframes to obtain an evaluation score, and obtain a real-time posture evaluation to assist the user in adjusting their posture and completing posture and motion detection and correction. This real-time posture and movement detection and correction method is simple and can quickly judge and correct human posture).
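For illustration only, the body-part indication recited in Claim 4 can be sketched as grouping joints into named parts and flagging any part whose mean joint distance from the reference pose exceeds a critical threshold; the grouping, threshold value, and function name below are hypothetical:

```python
import math

def body_part_feedback(current, reference, parts, critical=0.2):
    """Flag body parts whose mean joint distance from the reference pose
    exceeds a predefined critical distance (grouping and threshold are
    illustration values, not taken from the claims)."""
    flagged = []
    for name, joint_ids in parts.items():
        d = sum(math.dist(current[j], reference[j]) for j in joint_ids) / len(joint_ids)
        if d > critical:
            flagged.append(name)
    return flagged
```

The flagged part names would then drive the pose correction feedback described above.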
Regarding Claim 6, Li further teaches or suggests wherein selecting the particular two-dimensional skeleton projection further comprises: identifying the corresponding reference pose image by selecting a subgroup of potential corresponding reference pose images ([0017]: User motion videos and standard motion videos are cut into image sequences. [0027]: If a frame Ui in sequence U and a frame Vj in sequence V correspond to the same element in the optimal path, then Ui and Vj are corresponding frames, that is, Ui and Vj are matched).
Regarding Claims 8-11, Claims 8-11 are similar to Claims 1-4 except in the format of a system. Therefore, the same reasons for rejection applied to Claims 1-4 are also applied to Claims 8-11.
Regarding Claims 15-17 and 18, Claims 15-17 and 18 are similar to Claims 1-4 and 6 except in the format of a computer program product. Therefore, the same reasons for rejection applied to Claims 1-4 and 6 are also applied to Claims 15-17 and 18.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 110796077 A) in view of Xia et al. (“View invariant human action recognition using histograms of 3D joints”, Computer Vision and Pattern Recognition workshops (CVPRW), 2012 IEEE Computer Society Conference on, IEEE, June 16, 2012, pp.20-27) as applied to Claim 3 above, and further in view of Liao et al. (“A review of Computational Approaches for Evaluation of Rehabilitation Exercises”, Computers in Biology and Medicine, Vol. 119, March 4, 2020, 15 pages).
Regarding Claim 5, Li as modified fails to disclose wherein the pose correction feedback is output to the second human as visual information or sound information.
However Liao, in the same field of endeavor, discloses wherein the pose correction feedback is output to the second human as visual information or sound information (p.2 second paragraph last 7 lines: These systems employ a Kinect sensor to track patient movements, where a user interface displays two avatars that perform the prescribed exercise by the clinician and the ongoing movements and postures performed by the patient in real-time. Such visual feedback assists patients in improving their exercise performance, as well as in taking self-corrective action when needed [23]. Furthermore, the recordings of the daily exercise sessions can be sent via the internet to the respective clinician, who can assess the performance and provide feedback or corrective recommendations. P.22 first paragraph: Providing a qualitative or quantitative evaluation score of rehabilitation exercise to patients is far from sufficient to support effective implementation of at-home rehabilitation programs. The integration of voice assistants for conveying evaluation feedback to the patients can greatly improve the efficiency and user-friendliness of these types of systems. For instance, an integrated voice assistant (similar to Alexa or Google’s voice assistant) can instruct the patient on the sequence of movements to perform or the correctness of the posture during a practice session, as well as provide suggestions on how to improve the exercise quality or which aspects of the movements are not performed correctly).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 110796077 A) in view of Xia et al. (“View invariant human action recognition using histograms of 3D joints”, Computer Vision and Pattern Recognition workshops (CVPRW), 2012 IEEE Computer Society Conference on, IEEE, June 16, 2012, pp.20-27) as applied to Claim 8 above, and further in view of Vaezi Joze et al. (US 2019/0294871 A1).
Regarding Claim 12, Li modified by Xia fails to disclose wherein the pose matching module further comprises a normalizer function configured to transform the detected 2D skeleton into a normalized two dimensional skeleton in that each bone length of the detected 2D skeleton is divided by the height of the two-dimensional skeleton.
However Vaezi Joze, in the same field of endeavor, discloses a PHOSITA before the effective filing date of the claimed invention had already known to normalize a 2D skeleton by dividing the 2D skeleton by the height of the 2D skeleton ([0035]: In an example, the skeleton trajectory generator 102 may use a skeleton generator network 120 for generating the skeleton trajectory 152. … In an example, each of the skeletons of the skeletons trajectory 152 may comprise a predetermined number of joints (e.g., 18 joints) and may be represented by a predetermined vector (e.g., 1×36). Further, coordinates of each of the skeletons of the skeleton trajectory 152 may be normalized. For example, the coordinates of each of the skeletons of the skeleton trajectory 152 may be normalized by dividing the coordinates by a height and width of an original image). Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Vaezi Joze into that of Li modified by Xia and to include the limitation of wherein the pose matching module further comprises a normalizer function configured to transform the detected 2D skeleton into a normalized two dimensional skeleton in that each bone length of the detected 2D skeleton is divided by the height of the two-dimensional skeleton in order to put all features on a similar scale such as [0, 1] to ensure data is accurate and consistent across the reference database, which is a common practice to organize data obtained from different data source.
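The height normalization discussed in the Claim 12 rationale can be sketched, for illustration only, as dividing each 2D joint coordinate by the detected skeleton's overall vertical extent so bone lengths fall on a comparable scale; the function name and height-from-extremes computation are assumptions:

```python
def normalize_skeleton(joints_2d):
    """Divide 2D joint coordinates by the skeleton's overall height so that
    bone lengths become comparable across subjects and camera distances."""
    ys = [y for _, y in joints_2d]
    height = max(ys) - min(ys)
    if height == 0:  # degenerate skeleton; leave coordinates unchanged
        return list(joints_2d)
    return [(x / height, y / height) for x, y in joints_2d]
```

Dividing every coordinate by the same height divides every bone length by that height as well, which is the effect the claimed normalizer function is directed to.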
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 110796077 A) in view of Xia et al. (“View invariant human action recognition using histograms of 3D joints”, Computer Vision and Pattern Recognition workshops (CVPRW), 2012 IEEE Computer Society Conference on, IEEE, June 16, 2012, pp.20-27) as applied to Claim 8 above, and further in view of Yen et al. (“DETERMINING 3-D MOTION AND STRUCTURE OF A RIGID BODY USING STRAIGHT LINE CORRESPONDENCES”, ICASSP 83, BOSTON).
Regarding Claim 13, Li modified by Xia discloses projecting a 2D skeleton onto a 3D sphere (see Xia p.22 Fig.3). But Li modified by Xia fails to explicitly disclose wherein the virtual spherical surface is normalized with a radius equal to one, with each two-dimensional skeleton projection being a normalized two-dimensional representation of the three-dimensional skeleton from a different viewing angle.
However Yen, in the same field of endeavor, discloses projecting a 2D skeleton onto a unit sphere (p.1 left column first paragraph: This paper describes the main results obtained for the determination of 3-D motion and structure of a rigid body containing straight line segments, given a sequence of (2-D) perspective views [1]. The analysis is based on the representation of the central projection of a 3-D line on the unit sphere). Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Yen into that of Li modified by Xia and to include the above limitation in order to determine the 3D motion and structure of a rigid body from a sequence of 2D perspective views, as suggested by Yen as cited above.
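The unit-sphere normalization at issue in Claim 13 can be sketched, for illustration only, as radially projecting each point (taken relative to the sphere center) onto a sphere of radius one, removing overall scale; the function name is hypothetical:

```python
import math

def project_to_unit_sphere(points_3d):
    """Radially project 3D points (relative to the sphere center) onto a
    sphere of radius one, yielding a scale-normalized representation."""
    out = []
    for x, y, z in points_3d:
        n = math.sqrt(x * x + y * y + z * z)
        out.append((x / n, y / n, z / n) if n else (0.0, 0.0, 0.0))
    return out
```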
Allowable Subject Matter
Claims 7, 14 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Prior art, either individually or in combination, fails to disclose or render obvious the limitation of providing the two-dimensional skeleton of the second human in the current test frame to a neural network to predict a viewing perspective associated with the current test frame, the neural network trained with a plurality of training frames showing user poses while performing exercises as input, with each training frame being annotated with a corresponding viewing perspective as ground truth; and selecting the particular two-dimensional skeleton projection of the first human which is located at sphere coordinates that correspond to the predicted viewing perspective as claimed in dependent Claim 7. The closest prior art, Yang et al. (CN 110097008 A), discloses using a neural network to match a reference motion frame to a test motion frame (Abstract: The invention claims a gesture recognition method, comprising orthonormal coordinate system, three-dimensional coordinates to motion sample each frame performing standardization treatment of bone joint; design order from recurrent neural network, extracting each frame bone joint point coordinate feature. obtaining the feature vector of each frame from the training sample of each action type selecting one as the action type of the reference sample, the test action sample in each frame and each reference action sample, establishing timing corresponding to each frame for matching relation; matching cost of calculating the test action sample and each reference action sample, finding out the reference sample and test action sample with the smallest matching cost of action type, action type of the reference sample is the test motion sample). However, it fails to disclose the above-cited limitation. Similar limitations are recited in dependent Claims 14 and 19; therefore, the same rationale is applied.
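For illustration only, the supervised viewing-perspective prediction recited in Claim 7 can be sketched with a toy nearest-centroid classifier standing in for the claimed neural network (it is not the claimed method): training frames are flattened 2D skeleton vectors, each annotated with a viewing-perspective label as ground truth, and the test frame is assigned the label of the nearest learned centroid. All names and the classifier choice are assumptions:

```python
import math

def train_centroids(frames, labels):
    """Learn one centroid per annotated viewing perspective from training
    skeleton vectors (toy stand-in for training the claimed neural network)."""
    sums, counts = {}, {}
    for vec, lab in zip(frames, labels):
        s = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict_perspective(centroids, vec):
    """Predict the viewing perspective of a test skeleton vector as the label
    of the nearest learned centroid."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))
```

The predicted label would then index the two-dimensional skeleton projection located at the corresponding sphere coordinates.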
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE whose telephone number is (571)270-7218. The examiner can normally be reached M-F 8:00-5:00 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao M Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YINGCHUN HE/Primary Examiner, Art Unit 2613