Prosecution Insights
Last updated: April 17, 2026
Application No. 17/728,770

APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT FOR GENERATING LOCATION INFORMATION OF AN OBJECT IN A SCENE

Final Rejection — §102, §103
Filed: Apr 25, 2022
Examiner: SHERRILLO, DYLAN JOSEPH
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 4 (Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (above average): 39 granted / 43 resolved; +28.7% vs TC avg
Interview Lift: +11.8% (moderate lift) across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 14 applications currently pending
Career History: 57 total applications across all art units
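The career allow rate above is simply granted over resolved cases; a quick sanity check. The formula behind the 99% "with interview" figure is not stated on this page, so the capped additive lift below is a labeled guess, not the page's actual methodology:

```python
# Figures from the examiner card above.
granted, resolved = 39, 43

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 90.7%, displayed as 91%

# Hypothetical: the page does not state how the 99% "with interview" figure
# is derived; an additive +11.8% lift capped at 99% happens to reproduce it.
with_interview = min(allow_rate + 0.118, 0.99)
```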

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 42.3% (+2.3% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)

Comparisons are against a Tech Center average estimate • Based on career data from 43 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/25/2022 and 11/18/2022 were filed after the mailing date of the non-final office action on 8/19/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Status of Claims

Claims 2 and 8 are cancelled. Claims 1, 3-7, 9-12 and 14-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ginsberg (US 20160212385 A1). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Joynes (GB2403362B) as applied to claims 1 and 10-12 above, and further in view of Ginsberg (US 20160212385 A1).

Response to Arguments

Applicant's arguments filed 6/18/2025 have been fully considered but they are not persuasive. Applicant argues that Ginsberg does not disclose “utilizing the one or more properties of the object to constrain the location of the object within a region indicated by the predetermined predicted location information.” Examiner respectfully disagrees. In the previous office action, the examiner cited paragraph 36 in relation to a 3D mapping module that determines the location of the balls identified in 3D space. The 3D mapping module creates a map used to establish positions of reference against which an object's location within an image is compared: “In various embodiments, the 3D mapping module first determines the location and orientation of the camera in 3D space based on the positions of the lines of the court (or playing field, etc.) in the images. The 3D mapping module 320 is pre-programmed with the position of lines and other markings on the court or field for the sport in question.”
The location of an object is in reference to the sport field or court for which the 3D mapping module generates the space in which an object is observed. This is further expanded upon in the previous citation of paragraph 40, where the size and orientation of the ball are identified as properties: “The 3D mapping module 320 can then locate an object (e.g., a ball) on that line (and hence determine a precise location in 3D space) based on the apparent size of the object. For example, in one embodiment, the 3D mapping module 320 is pre-programmed with the dimensions of the ball. Therefore, by comparing the apparent size of the ball in the image with the known dimensions, the 3D mapping module 320 can determine the distance between the camera and the ball. In embodiments where the ball is non-symmetric (e.g., a football, puck, or shuttlecock), the 3D mapping module 320 first determines the current orientation of the ball based on its apparent shape in the image. Once the orientation has been determined, the 3D mapping module 320 compares the ball's apparent size with an expected size for that orientation to determine the distance between the camera and the ball.” While a distance between the camera and the object is identified, a location of where the object is in the 3D mapped space is still identified, and the distance is used to see whether an object is on a boundary line of the court/field. These factors are then used in the trajectory analysis module, where an object's x and y coordinates are identified and predicted, per paragraph 41: “The trajectory analysis module 330 may work in 3D space or image space, with the 3D mapping module 320 later mapping the trajectory into 3D space as required. In one embodiment, the trajectory analysis module 330 calculates the trajectory of the ball assuming that the only force acting on it is gravity (i.e., ignoring factors such as ball spin, air resistance, and wind). Thus, the trajectory is a parabola and can be completely determined from six variables: the initial three-dimensional position and velocity vectors. Given n images from times t₁ through tₙ, the trajectory analysis module 330 has n points, with each point including an apparent ball radius, and a two-dimensional (e.g., x and y coordinates) ball center location, C₁.”

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-12, and 14-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ginsberg (US 20160212385 A1).
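The gravity-only model quoted from paragraph 41 (a parabola completely determined by the initial three-dimensional position and velocity vectors) reduces to a linear least-squares problem once the known gravity term is subtracted. A minimal sketch; the function name, variable names, and synthetic data are illustrative and not taken from Ginsberg:

```python
import numpy as np

G = 9.81  # m/s^2; gravity assumed to act along -z

def fit_trajectory(times, positions):
    """Least-squares fit of a gravity-only trajectory (a parabola).

    Solves p(t) = p0 + v0*t + 0.5*a*t^2 with a = (0, 0, -G) for the six
    unknowns: initial 3-D position p0 and velocity v0. Subtracting the
    known gravity term leaves a model linear in p0 and v0.
    """
    t = np.asarray(times, dtype=float)
    obs = np.asarray(positions, dtype=float)           # shape (n, 3)
    obs = obs - 0.5 * np.array([0.0, 0.0, -G]) * t[:, None] ** 2
    A = np.column_stack([np.ones_like(t), t])          # design matrix (n, 2)
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)     # coef has shape (2, 3)
    return coef[0], coef[1]                            # p0, v0

# Demo on noise-free synthetic observations (values are made up):
t = np.linspace(0.0, 1.0, 5)
true_p0, true_v0 = np.array([0.0, 0.0, 2.0]), np.array([1.0, 2.0, 5.0])
obs = true_p0 + true_v0 * t[:, None] + 0.5 * np.array([0.0, 0.0, -G]) * t[:, None] ** 2
p0, v0 = fit_trajectory(t, obs)  # recovers true_p0 and true_v0
```

With noise-free input the fit recovers the six variables exactly; with real detections the residual plays the role of the fitting error the reference describes.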
Regarding claim 1, Ginsberg teaches: an apparatus for generating location information of an object in a scene (Figure 2, #110 and #120, an apparatus for generating location information of object #250), the apparatus comprising circuitry configured to: (Paragraph 34, lines 9-13, “The local search (implemented by the local search module 314) limits its search to a region surrounding the projected location of a ball in the current frame, based on the appearance of that (possibly moving) ball in the previous frame or frames.”) acquire image data of a scene from an image capture device (Paragraph 19, lines 1-2, “The recording device 110 captures images that include a ball.”); acquire predicted location information of an object in the scene indicative of a region of the scene in which the object is predicted to be located at a given time (Paragraph 56, lines 1-6, “Based on the projected trajectory, the data processing device 120 predicts 630 a sporting outcome. In an embodiment where the projected trajectory includes multiple possible trajectories and corresponding probabilities, the data processing device divides the possible trajectories into groups that correspond to different sporting outcomes.”); detect one or more properties of the object from the image data, the properties of the object indicative of an observed location of the object in the scene (Paragraph 54, lines 8-15, “The second effect introduces an additional force into the calculations performed by the trajectory analysis module 330. In one embodiment, the trajectory analysis module 330 assumes the spin on the ball is constant and treats it as another variable to be used in fitting a trajectory to the observed data. It uses modified equations of motion that include a spin term, which is an additional acceleration vector orthogonal to both the spin vector and the direction of motion.”); and generate location information of the object in the scene using the predicted location information and the one or more properties of the object (Paragraph 47, lines 1-8, “The second effect introduces an additional force into the calculations performed by the trajectory analysis module 330. In one embodiment, the trajectory analysis module 330 assumes the spin on the ball is constant and treats it as another variable to be used in fitting a trajectory to the observed data. It uses modified equations of motion that include a spin term, which is an additional acceleration vector orthogonal to both the spin vector and the direction of motion.”).

Regarding claim 3: Ginsberg teaches: the apparatus according to claim 1, wherein the image data comprises a plurality of images (Paragraph 17, lines 1-3, “In one embodiment, a data processing device receives a plurality of digital images, each image including a ball, and identifies the position of the ball in each image. The data processing device also projects the trajectory of the ball based on the positions of the ball identified in the images.”) of the scene and wherein the circuitry is further configured to generate location information of the object in the scene using one or more properties of the image detected from each of the plurality of images (Paragraph 54, lines 8-15, “The appropriate balance of prediction accuracy and data processing time in any given scenario depends on numerous factors, including the specific sport, the nature of the advisory information, the processing power available, and the preference of the individual receiving the advisory information. In one embodiment, the locations include x and y coordinates, and an apparent ball radius, which is used as a proxy for a z coordinate, as described previously.”).
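The "apparent ball radius used as a proxy for a z coordinate" idea cited above is the standard pinhole-camera relation between apparent size and distance. A minimal sketch; the focal length and ball diameter below are illustrative numbers, not values from the reference:

```python
def distance_from_apparent_size(focal_px, true_diameter_m, apparent_diameter_px):
    """Pinhole-camera estimate of camera-to-object distance.

    Under the pinhole model, apparent_size ≈ focal * true_size / distance,
    so distance ≈ focal * true_size / apparent_size.
    """
    return focal_px * true_diameter_m / apparent_diameter_px

# Illustrative numbers: 1000 px focal length, a ~0.067 m ball appearing
# 20 px wide in the image -> about 3.35 m from the camera.
d = distance_from_apparent_size(1000.0, 0.067, 20.0)
```

A larger apparent radius means a closer ball, which is why the radius can stand in for depth when fitting a 3D trajectory from 2D detections.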
Regarding claim 4: Ginsberg teaches: the apparatus according to claim 3, wherein the location information generated by the circuitry using the predicted location information and the one or more properties of the object comprises a path of the object through the scene (Paragraph 47, lines 1-8, “The second effect introduces an additional force into the calculations performed by the trajectory analysis module 330. In one embodiment, the trajectory analysis module 330 assumes the spin on the ball is constant and treats it as another variable to be used in fitting a trajectory to the observed data. It uses modified equations of motion that include a spin term, which is an additional acceleration vector orthogonal to both the spin vector and the direction of motion.”).

Regarding claim 5: Ginsberg teaches: the apparatus according to claim 4, wherein the circuitry is further configured to generate additional information regarding the object using the path of the object through the scene (Figure 6, #630 and #640, additional information given based on trajectory in the form of an outcome and a notification).

Regarding claim 6: Ginsberg teaches: the apparatus according to claim 5, wherein the additional information comprises information including at least one of a speed of the object (Paragraph 44, lines 4-7, “In one embodiment, the trajectory analysis module 330 fits several parabolas using slightly different values for the initial position and velocity variables and the corresponding error for each”), a spin of the object (Paragraph 47, lines 1-8, “The second effect introduces an additional force into the calculations performed by the trajectory analysis module 330. In one embodiment, the trajectory analysis module 330 assumes the spin on the ball is constant and treats it as another variable to be used in fitting a trajectory to the observed data. It uses modified equations of motion that include a spin term, which is an additional acceleration vector orthogonal to both the spin vector and the direction of motion.”) and/or a location of the object at a future time (Paragraph 20, lines 1-3, “The data processing device 120 processes images received from the recording device 110 and predicts the trajectory of the ball.”).

Regarding claim 7: Ginsberg teaches: the apparatus according to claim 1, wherein the one or more properties of the object comprise at least one of a two dimensional location of the object in the image data (Paragraph 41, lines 12-14, “Given n images from times t₁ through tₙ, the trajectory analysis module 330 has n points, with each point including an apparent ball radius, and a two-dimensional (e.g., x and y coordinates) ball center location,”), a size of the object in the image data, a relative position of the object to a second object at a predetermined location in the scene and a relative location of the object and a shadow formed by the object on a surface.
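The spin term quoted repeatedly above (an additional acceleration orthogonal to both the spin vector and the direction of motion) behaves like a Magnus-style cross-product term. A hedged sketch; the coefficient k and all numeric values are illustrative, not from Ginsberg:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

def acceleration(velocity, spin, k=0.1):
    """Modified equation of motion with a spin term.

    k * cross(spin, velocity) is, by construction of the cross product,
    orthogonal to both the spin vector and the direction of motion, as the
    quoted passage requires. The coefficient k is purely illustrative.
    """
    return GRAVITY + k * np.cross(spin, velocity)

# Spin about +z on a ball moving along +x curves it toward +y
# (a sideways 50 m/s^2 component on top of gravity):
a = acceleration(np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.0, 50.0]))
```

Treating the spin vector as one more unknown, as the quote describes, simply adds it to the set of variables fitted against the observed ball positions.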
Regarding claim 9: Ginsberg teaches: the apparatus according to claim 8, wherein the default trajectory of the object through the scene is characterised by one or more trajectory coefficients and wherein generating the location information of the object in the scene comprises adjusting the values of the one or more trajectory coefficients from their initial values using the one or more properties of the object which have been detected (Paragraph 42, from line 7 of page 10 to line 3 of page 11, “In one embodiment, the trajectory analysis module 330 calculates the error of the fitted parabola with the equation: e(p, v) = Σᵢ [(cᵢ − Cᵢ)² + (rᵢ − Rᵢ)²], where e(p, v) is the error in the fitted parabola, cᵢ − Cᵢ is the difference between the predicted and observed ball center location for image i, and rᵢ − Rᵢ is the difference between the predicted and observed ball radii for image i. In other embodiments, the contributions to the total error of the ball center position terms and the ball radii terms are weighted differently.”).

Regarding claim 10: Ginsberg teaches: the apparatus according to claim 1, wherein the circuitry is further configured to acquire different predetermined predicted location information of the object depending on a type of the object (Paragraph 43, lines 1-9, “FIG. 4A illustrates a parabola 410 fitted by the trajectory analysis module 330 using three data points 412, 414, and 416 corresponding to three observed positions of the ball 250. The trajectory analysis module 330 also considered the observed radius of the ball corresponding to each data point in fitting the parabola 410. Thus, even though a slightly different parabola could be used that would pass exactly through every data point in this view, the total error is minimized by having the parabola 410 pass close by point 416.”).
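The fitted-parabola error cited for claim 9 sums squared center-location and radius differences over all images. Note that the quoted equation is garbled in this reproduction, so the '+' joining the two terms and the equal default weights below are assumptions consistent with the surrounding description, not a verbatim transcription:

```python
import numpy as np

def fit_error(pred_centers, obs_centers, pred_radii, obs_radii,
              w_center=1.0, w_radius=1.0):
    """Total error of a fitted parabola against the observed detections.

    Sums squared center-location differences and squared radius differences
    over all images; the weights model the "weighted differently" variant
    mentioned in the quote. Joining the terms with '+' is an assumption.
    """
    c_err = np.sum((np.asarray(pred_centers) - np.asarray(obs_centers)) ** 2)
    r_err = np.sum((np.asarray(pred_radii) - np.asarray(obs_radii)) ** 2)
    return w_center * c_err + w_radius * r_err

# Two images: one center off by 1 px and one radius off by 1 px -> error 2.0
e = fit_error([[0.0, 0.0], [1.0, 1.0]], [[0.0, 0.0], [1.0, 2.0]],
              [5.0, 5.0], [5.0, 6.0])
```

Minimizing this error over the six trajectory variables is what "adjusting the trajectory coefficients from their initial values" amounts to in practice.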
Regarding claim 11: Ginsberg teaches: the apparatus according to claim 10, wherein the object is a ball (Claim 11, “The system of claim 1, wherein the ball is one of: a volleyball, a tennis ball, a basketball, a hockey puck, a shuttlecock, a football, a cricket ball, a golf ball, or a soccer ball.”) and wherein the ball is bowled by a player and wherein the circuitry is further configured to acquire different predetermined predicted location information depending on the player who bowled the ball (Paragraph 43, lines 1-9, “FIG. 4A illustrates a parabola 410 fitted by the trajectory analysis module 330 using three data points 412, 414, and 416 corresponding to three observed positions of the ball 250. The trajectory analysis module 330 also considered the observed radius of the ball corresponding to each data point in fitting the parabola 410. Thus, even though a slightly different parabola could be used that would pass exactly through every data point in this view, the total error is minimized by having the parabola 410 pass close by point 416.”).

Regarding claim 12: Ginsberg teaches: the apparatus according to claim 11, wherein the apparatus is further configured to generate the location information of the object in the scene using the predetermined predicted location information, the one or more properties of the object and at least one of an initial location from where the ball (Claim 11, “The system of claim 1, wherein the ball is one of: a volleyball, a tennis ball, a basketball, a hockey puck, a shuttlecock, a football, a cricket ball, a golf ball, or a soccer ball.”) is bowled, an initial speed of the ball (“In one embodiment, the trajectory analysis module 330 fits several parabolas using slightly different values for the initial position and velocity variables and the corresponding error for each”) when the ball is bowled and/or a time at which the ball contacts the ground.
Regarding claim 14: Ginsberg teaches: the apparatus according to claim 1, wherein the apparatus is configured to be calibrated using the position of one or more predetermined features in an initial image of the scene (Paragraph 26, lines 1-6, “The image analysis module 310 constructs the first image by comparing the color of each pixel in the original video frame to an expected color of the ball. The pixels in the first image are set as on or off depending on whether the corresponding pixel in the original video frame matches the expected color of the ball within a threshold tolerance.”).

Regarding claim 15: Ginsberg teaches: a method of generating location information of an object in a scene, the method comprising (Paragraph 12, lines 1-2, “FIG. 7 is a flow-chart illustrating a method for identifying a ball in an image, according to one embodiment.”). The rest of the embodiments of claim 15 are rejected on the same grounds as claim 1.

Regarding claim 16: Ginsberg teaches: a non-transitory computer readable medium storing a computer program product comprising instructions which, when the instructions are implemented by a computer, cause the computer to perform a method of generating location information of an object in a scene, the method comprising (Paragraph 51, lines 1-6, “In the embodiment shown in FIG. 5, the storage device 508 is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 506 holds instructions and data used by the processor 502.”). The rest of the embodiments of claim 16 are rejected on the same grounds as claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Joynes (GB2403362B) as applied to claims 1 and 10-12 above, and further in view of Ginsberg (US 20160212385 A1).

Regarding Claim 13: Ginsberg teaches the embodiments of claim 12 as applied above. Ginsberg does not explicitly teach the following; however, in related art, Joynes teaches: wherein the circuitry is further configured to acquire audio to determine the time at which the ball contacts the ground (Page 11, lines 8-13, “Such microphones pick up the sound of an impact event between any of the above objects with a delay, which depends principally on the distance between the microphone and the impact event location, and the effective speed of sound, which will depend at least on meteorological factors such as wind speed and the direction, presence of rain etc.”). Joynes and Ginsberg are considered to be analogous to the claimed invention because both are in the same field of object detection within a location.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus that detects an object within a location to incorporate a microphone that can acquire audio when the ball contacts the ground. This provides the predictable result of an apparatus that can track an object moving through a location with more information gathered by the microphone.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J SHERRILLO, whose telephone number is (703) 756-5605. The examiner can normally be reached Monday-Thursday 10am-7:30pm EST in the first week of each bi-week, and Monday-Thursday 10am-7:30pm EST plus Friday 10am-6:30pm EST in the second week.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.J.S./ Examiner, Art Unit 2665
/Stephen R Koziol/ Supervisory Patent Examiner, Art Unit 2665
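The Joynes microphone passage cited in the §103 rejection above is a simple speed-of-sound correction: the impact is heard late by distance divided by the effective speed of sound. A minimal sketch; the 343 m/s figure is a standard dry-air assumption rather than a value from the reference, which notes the effective speed varies with weather:

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at ~20 C (assumed; varies with weather)

def impact_time(t_heard_s, mic_to_impact_m, speed=SPEED_OF_SOUND):
    """Back out the true impact time from the sound's arrival at the mic.

    The sound arrives late by distance / speed_of_sound; subtract that
    propagation delay from the arrival timestamp.
    """
    return t_heard_s - mic_to_impact_m / speed

# A mic 34.3 m from the bounce hears it roughly 0.1 s late:
t = impact_time(10.0, 34.3)  # ~9.9 s
```

Cross-referencing this corrected impact time against the fitted trajectory is what lets audio pin down the moment the ball contacts the ground.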

Prosecution Timeline

Apr 25, 2022
Application Filed
Aug 21, 2024
Non-Final Rejection — §102, §103
Nov 06, 2024
Response Filed
Dec 23, 2024
Final Rejection — §102, §103
Feb 28, 2025
Response after Non-Final Action
Mar 28, 2025
Request for Continued Examination
Mar 31, 2025
Response after Non-Final Action
Apr 16, 2025
Non-Final Rejection — §102, §103
Jun 18, 2025
Response Filed
Sep 05, 2025
Final Rejection — §102, §103
Apr 04, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591907: SYSTEM AND METHOD TO DETECT A GAZE AT AN OBJECT BY UTILIZING AN IMAGE SENSOR (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579798: IMAGE PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567166: DEVICE FOR PROCESSING IMAGE AND OPERATING METHOD THEREOF (granted Mar 03, 2026; 2y 5m to grant)
Patent 12541825: MODEL TRAINING METHOD, IMAGE PROCESSING METHOD, COMPUTING AND PROCESSING DEVICE AND NON-TRANSIENT COMPUTER-READABLE MEDIUM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12530826: CORRECTION OF ARTIFACTS OF TOMOGRAPHIC RECONSTRUCTIONS BY NEURON NETWORKS (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 91%
With Interview: 99% (+11.8%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
