Detailed Action
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15, 17, 18, 20, 21 and 23-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 134 S. Ct. 2347 (2014). The claim(s) recite(s) a series of mathematical computations, which can be practicably performed by a human using only their mind, pen and paper.
The claim(s) recite(s), inter alia,
receive image data about a scene, wherein image data comprises one or more images of the scene
wherein the one or more images comprise a plurality of pixels, the pixels having pixel coordinates in the one or more images
wherein the image data includes depictions of an alignment device and a target in the scene
translating a plurality of the pixel coordinates applicable to the alignment device to 3D coordinates in a frame of reference based on a spatial model of the scene
determining an orientation of the alignment device relative to the frame of reference based on the translating
generating alignment data based on the determined alignment device orientation, wherein the generated alignment data is indicative of a relative alignment for the alignment device in the scene with respect to a golf shot for striking a golf ball toward the target
generating feedback indicative of the alignment data for presentation to a user
Under the broadest reasonable interpretation, claims 1, 29 and 30 recite limitations performable in the human mind. A human—using only their mind, pen, and paper—is capable of receiving image data about a scene comprising one or more images, the images comprising a plurality of pixels having pixel coordinates and depicting an alignment device and a target, translating a plurality of the pixel coordinates applicable to the alignment device to 3D coordinates in a frame of reference based on a spatial model of the scene, determining an orientation of the alignment device relative to the frame of reference based on the translating, generating alignment data based on the determined alignment device orientation, wherein the generated alignment data is indicative of a relative alignment for the alignment device in the scene with respect to a golf shot for striking a golf ball toward the target, and generating feedback indicative of the alignment data for presentation to a user.
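For purposes of illustration only, the recited steps reduce to elementary coordinate geometry of the following kind (the pixel values, scale factor, and target bearing below are hypothetical and are not drawn from the application):

```python
import math

# Hypothetical values for illustration; not from the application.
scale = 0.01      # assumed meters per pixel in the spatial model
p1 = (100, 200)   # pixel coordinates of one end of the alignment device
p2 = (300, 250)   # pixel coordinates of the other end

# Translate pixel coordinates to 3D coordinates on the ground plane (z = 0).
a = (p1[0] * scale, p1[1] * scale, 0.0)
b = (p2[0] * scale, p2[1] * scale, 0.0)

# Determine the orientation of the device relative to the frame of reference.
orientation = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

# Generate alignment data: offset between device orientation and the
# bearing to the target (assumed value).
target_bearing = 20.0
alignment_offset = orientation - target_bearing
print(round(orientation, 2), round(alignment_offset, 2))
```

Each step is a single arithmetic operation or table lookup of the kind performable with pen and paper.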
The abstract idea is not integrated into a practical application. Claims 1, 29 and 30 recite the limitations "one or more processors," "a non-transitory computer-readable storage medium," "one or more cameras," a "mobile device," a "launch monitor," "VR equipment," a "range finder," and "computer vision comprising a trained machine learning classifier." These additional elements, whether considered individually or in combination, do not integrate the abstract idea into a practical application because:
One or more processors: is not specifically described in the specification. Instead, the processors are described as part of a mobile device:
The mobile device 300 of FIG. 3A can be a smart phone (e.g., an iPhone, a Google Android device, a Blackberry device, etc.), tablet computer (e.g., an iPad), wearable device (e.g., VR equipment such as VR goggles, VR glasses, or VR headsets), or the like.
It is therefore reasonable to interpret this feature as a routine and conventional computing component.
A non-transitory computer-readable storage medium: Again, this feature is not specifically described in the specification beyond the phrase "such as a computer memory." Therefore, it is also fairly construed as routine and conventional.
One or more cameras, mobile device, launch monitor, VR equipment, range finder:
Again, these features are described only cursorily in the specification. Therefore, they are also fairly construed as routine and conventional.
Computer vision comprising a trained machine learning classifier: is described in the specification as:
The object recognition can be based on machine learning (ML) techniques where a classifier is trained using known images of alignment devices to detect the presence of an alignment device in an image. An example of ML techniques that can be used in this regard include YOLOX and convolutional neural networks (CNNs) that are trained to recognize alignment device. … The object recognition can be based on machine learning
(ML) techniques where a classifier is trained using known images of target indicators such as hole flags to detect the presence of a hole flag in an image. An example of ML techniques that can be used in this regard include convolutional neural networks (CNNs) that are trained to recognize hole flags (and golf balls).
Thus, the machine learning model described is of a conventional type, applying trained mathematical algorithms to recognize patterns in image data. The techniques used in its construction and application are analogous to those of Example 2, Claim 2 of the July 2024 Subject Matter Eligibility Update, which was cited as an example of ineligible subject matter.1
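For illustration only, the classification step of such a conventional trained model reduces to routine arithmetic on learned weights (the weights, features, and threshold below are hypothetical, not taken from the application):

```python
# Hypothetical illustration of conventional classifier mathematics:
# a trained model reduces to fixed learned weights applied to features.
weights = [0.5, -0.2, 0.8]   # assumed learned parameters
bias = -0.1
features = [0.9, 0.4, 0.7]   # assumed features extracted from an image patch

# Weighted sum plus bias, compared against a detection threshold.
score = sum(w * f for w, f in zip(weights, features)) + bias
detected = score > 0.5       # "alignment device present" if above threshold
print(round(score, 2), detected)
```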
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a) A person shall be entitled to a patent unless—
(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention
Claims 1-3, 17, 20, 24, 27 and 29-32 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kiraly (US 2018/0133578 A1).
Regarding claim 1, Kiraly discloses an article of manufacture comprising a plurality of instructions that are executable by one or more processors and resident on a non-transitory computer-readable storage medium (inherent in Fig. 2), wherein the instructions are configured upon execution to cause the one or more processors to perform a plurality of operations comprising: interfacing with one or more cameras to receive image data about a scene, wherein image data comprises one or more images of the scene (Fig. 7A), wherein the one or more images comprise a plurality of pixels, the pixels having pixel coordinates in the one or more images (¶ [0072]), wherein the image data includes depictions of an alignment device and a target in the scene (Fig. 3B); translating a plurality of the pixel coordinates applicable to the alignment device to 3D coordinates in a frame of reference based on a spatial model of the scene (¶ [0072]: alignment stick is detected by two separate sensors to find the 3D line formed by the stick); determining an orientation of the alignment device relative to the frame of reference based on the translating and generating alignment data based on the determined alignment device orientation, wherein the generated alignment data is indicative of a relative alignment for the alignment device in the scene with respect to a golf shot for striking a golf ball toward the target (¶ [0007]: detect a horizontal edge within the image representative of the alignment stick by detecting large contrast changes, convert each edge to a vector that starts at the sensor's focal point and projects into space based on the sensor's calibration, locate the plane formed by the vectors by applying standard outlier removal and best fit analysis, determine the intersection of the plane and an earth tangential plane and calculate an azimuth alignment angle offset based on the line and the monitor's default alignment … the calculated azimuth alignment angle can then be used to adjust ball flight trajectory calculations); and generating feedback indicative of the alignment data for presentation to a user (¶ [0069]: launch monitor may illuminate the aligned indicator … notifying the user that the launch monitor is now aligned).
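For illustration only, the plane-fit and azimuth computation described in Kiraly's ¶ [0007] can be sketched as follows (the ray vectors and the zero-degree default alignment below are hypothetical values, not taken from Kiraly):

```python
import math

def cross(u, v):
    # Standard 3D cross product.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Two rays from the sensor's focal point toward points on the alignment
# stick (hypothetical calibrated direction vectors).
r1 = (1.0, 0.2, -0.5)
r2 = (1.0, 0.3, -0.4)

# The plane formed by the rays (a best fit is trivial with two vectors).
normal = cross(r1, r2)

# Intersection of that plane with an earth-tangential plane (z = 0):
# the line direction lies in both planes.
line_dir = cross(normal, (0.0, 0.0, 1.0))

# Azimuth of the stick relative to the monitor's default alignment
# (assumed here to be 0 degrees).
azimuth = math.degrees(math.atan2(line_dir[1], line_dir[0]))

# The line has no inherent direction; fold the angle into (-90, 90].
if azimuth > 90:
    azimuth -= 180
elif azimuth <= -90:
    azimuth += 180
print(round(azimuth, 2))
```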
Regarding claim 2, Kiraly discloses identifying the alignment device in the image data in response to user input that identifies at least two points on the alignment device, and calculating the alignment device orientation based on the identified alignment device (¶ [0071]: processor analyzes the images from the sensor to detect the presence or absence of an alignment stick … detection of the alignment stick is aided by the following known conditions: the alignment stick is long and straight).
Regarding claim 3, Kiraly discloses automatically detecting the alignment device in the image data based on computer vision (¶ [0071]), and calculating the alignment device orientation based on the detected alignment device (¶ [0069]: launch monitor may illuminate the aligned indicator … notifying the user that the launch monitor is now aligned).
Regarding claim 17, Kiraly discloses determining a ground plane for the scene based on the image data, wherein the determined ground plane establishes the frame of reference (¶ [0007]: earth tangential plane).
Regarding claim 20, Kiraly discloses determining a location for the target relative to the frame of reference based on the image data (Fig. 6), determining an alignment line having the determined alignment device orientation, comparing the determined target location with the determined alignment line and generating the alignment data based on the comparison between the determined target location with the determined alignment line (Fig. 1).
Regarding claim 24, Kiraly discloses wherein the alignment device comprises an alignment stick placed on the ground (¶ [0060]).
Regarding claim 27, Kiraly discloses wherein the instructions define a mobile application for execution by a mobile device (¶ [0062]).
Claims 29 and 30 recite a method and system, respectively, comprising the same limitations as those in claim 1 above. They are accordingly rejected for the same reasons given supra.
Regarding claim 31, Kiraly discloses a mobile device, wherein the one or more processors is or are part of the mobile device (¶ [0062]).
Regarding claim 32, Kiraly discloses a launch monitor, wherein the one or more processors is or are part of the launch monitor (Abstract and Fig. 2).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. If this application names joint inventors, Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Kiraly in view of Molinari et al (US 2011/0065530 A1).
Regarding claim 25, Molinari suggests—where Kiraly does not disclose—wherein the alignment device comprises a line on the golf ball (Fig. 1). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of Kiraly and Molinari in order to make the system easier to set up and use.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Kiraly in view of Russo (US 2020/0306611 A1).
Regarding claim 26, Russo suggests—where Kiraly does not disclose—wherein the alignment device comprises a plurality of alignment devices (Fig. 9). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of Kiraly and Russo in order to make the system more useful by guiding both the golfer’s setup and swing plane.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Kiraly in view of Maani et al (US 2018/0053308 A1).
Regarding claim 28, Maani suggests—where Kiraly does not disclose—wherein the one or more processors comprise a plurality of processors, and wherein the instructions include a plurality of instructions for execution by different ones of the processors (¶ [0019]). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of Kiraly and Maani in order to make the system more efficient by allowing parallel execution of non-coupled tasks.
Claims 33 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Kiraly in view of Bose et al (US 2015/0318015 A1).
Regarding claim 33, Bose suggests—where Kiraly does not disclose—wherein the one or more processors is or are part of the VR equipment (¶ [0056]: golfer may also wear VR glasses that allow the golfer to see … distance to the hole). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of Kiraly and Bose in order to make the system useful by supplying additional status information to the player.
Regarding claim 34, Bose suggests—where Kiraly does not disclose—wherein the one or more processors is or are part of the range finder (¶ [0056]: golfer may also wear VR glasses that allow the golfer to see … distance to the hole). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of Kiraly and Bose in order to make the system useful by supplying additional status information to the player.
Allowable Subject Matter
Claims 16, 19 and 22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Claims 4-15, 18, 21 and 23 are not subject to a prior art rejection, but remain rejected as ineligible subject matter under 35 USC § 101 as detailed supra.
The prior art considered pertinent to applicant's disclosure and not relied upon is made of record on the attached PTO-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVE ROWLAND whose telephone number is (469) 295-9129. The examiner can normally be reached on M-Th 10-8. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Dmitry Suhol can be reached at (571) 272-4430. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Applicant may choose, at his or her discretion, to correspond with Examiner via Internet e-mail. A paper copy of any and all email correspondence will be placed in the appropriate patent application file. Email communication must be authorized in advance. Without a written authorization by applicant in place, the USPTO will not respond via e-mail to any correspondence which contains information subject to the confidentiality requirement as set forth in 35 U.S.C. 122.
Authorization may be perfected by submitting, on a separate paper, the following (or similar) disclaimer:
Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.
See MPEP 502.03 for more information.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVE ROWLAND/Primary Examiner, Art Unit 3715
1 Available at https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf