Prosecution Insights
Last updated: April 19, 2026
Application No. 18/212,480

Perspective Correction of User Input Objects

Non-Final OA: §101, §102, §103
Filed: Jun 21, 2023
Examiner: YENTRAPATI, AVINASH
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 74% (499 granted / 671 resolved), +12.4% vs TC avg (above average)
Interview Lift: -5.0% (minimal), based on resolved cases with interview
Typical Timeline: 2y 11m average prosecution
Career History: 698 total applications across all art units, 27 currently pending
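The headline figures above are simple ratios over the examiner's resolved docket. A rough Python sketch of the arithmetic, assuming a hypothetical record format (dicts with "resolved", "granted", and "interviewed" flags; not this tool's actual schema):

```python
def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    resolved = [c for c in cases if c["resolved"]]
    return sum(c["granted"] for c in resolved) / len(resolved)

def interview_lift(cases):
    """Allow-rate delta: interviewed vs. non-interviewed resolved cases."""
    resolved = [c for c in cases if c["resolved"]]
    with_iv = [c for c in resolved if c["interviewed"]]
    without_iv = [c for c in resolved if not c["interviewed"]]
    rate = lambda group: sum(c["granted"] for c in group) / len(group)
    return rate(with_iv) - rate(without_iv)

# Sanity check against the figures shown: 499 grants over 671 resolved
# cases is 499 / 671 ≈ 0.744, the 74% career allow rate. A negative
# interview lift (-5.0 points here) means interviewed cases closed with
# a grant slightly less often than non-interviewed ones.
```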

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 671 resolved cases.
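Each delta is the examiner's rate minus the Tech Center average, so the estimated TC averages can be recovered from the figures shown. A small sketch of that arithmetic, using only the numbers displayed above:

```python
# tc_avg is implied by delta = examiner_rate - tc_avg.
examiner_rate = {"101": 0.111, "102": 0.239, "103": 0.520, "112": 0.112}
delta_vs_tc = {"101": -0.289, "102": -0.161, "103": 0.120, "112": -0.288}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1%}, TC avg ≈ {tc_avg:.1%} "
          f"({delta_vs_tc[statute]:+.1%})")
```

Notably, every statute implies the same ≈40.0% baseline, consistent with the single Tech Center average estimate the original chart plotted.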

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claim 1 recites “transforming the camera set of two-dimensional coordinates into a display set of two-dimensional coordinates based on the depth information of the physical environment,” which falls under the grouping of mathematical relationships because the transformation of coordinates involves a mathematical operation. The limitations “obtaining a camera set of two-dimensional coordinates of a user input object in a physical environment” and “obtaining depth information of the physical environment, wherein the depth information includes a depth value for the camera set of two-dimensional coordinates that is different than a depth to the user input object” merely recite data gathering steps, which are insignificant extra-solution activity. The claim does not recite additional limitations that would integrate the abstract idea into a practical application, nor does it involve an inventive concept.

Dependent claim 2 recites “wherein the user input object includes at least a portion of a hand of a user,” which merely describes the object in the data that is gathered.

Dependent claim 3 recites “wherein the user input object includes a handheld device,” which merely describes the object in the data that is gathered.

Dependent claim 4 recites “wherein obtaining the camera set of two-dimensional coordinates includes obtaining a physical set of three-dimensional coordinates of the user input object and projecting the physical set of three-dimensional coordinates to a camera image plane,” which is merely a data gathering step and thus insignificant extra-solution activity.

Dependent claim 5 recites “wherein obtaining the camera set of two-dimensional coordinates includes detecting the user input object in an image of the physical environment,” where the obtaining step is merely a data gathering step. The detecting step falls under the grouping of mental processes because a person can visually inspect an image and identify the user input object in the image.

Dependent claim 6 recites “wherein the depth information of the physical environment is a smoothed depth map,” which merely describes the data that is gathered.

Dependent claim 7 recites “wherein the depth information of the physical environment is a clamped depth map,” which merely describes the data that is gathered.

Dependent claim 8 recites “wherein the depth information of the physical environment is a static depth map,” which merely describes the data that is gathered.

Dependent claim 9 recites “wherein the depth value for the camera set of two-dimensional coordinates represents a depth to a static object behind the user input object,” which merely describes the data that is gathered.

Dependent claim 10 recites “wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates via interpolation using depth values of locations surrounding the camera set of two-dimensional coordinates.” The obtaining step is merely a data gathering step, which is insignificant extra-solution activity. The interpolation step falls under the grouping of mathematical concepts because interpolation involves mathematical operations.

Dependent claim 11 recites “wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates at a time the user input object was not at the camera set of two-dimensional coordinates.” The obtaining step is merely a data gathering step, which is insignificant extra-solution activity. The step of determining the depth value falls under the grouping of mental processes because a person can visually inspect an image and estimate the depth value.

Dependent claim 12 recites “wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates based on a three-dimensional model of the physical environment excluding the user input object.” The obtaining step is merely a data gathering step, which is insignificant extra-solution activity. The step of determining the depth value falls under the grouping of mental processes because a person can visually inspect an image or a 3D model to estimate the depth value.

Dependent claim 13 recites “further comprising determining an input set of three-dimensional coordinates of the user input object by triangulating the display set of two-dimensional coordinates and a second display set of two-dimensional coordinates,” which falls under the grouping of mathematical concepts because the triangulation involves mathematical operations.

Dependent claim 14 recites “further comprising determining a user input according to the input set of three-dimensional coordinates,” which falls under the grouping of mathematical processes because it is determined based on triangulation.

Dependent claim 15 recites “further comprising changing display of virtual content in response to the user input,” which is merely a data output step and thus insignificant extra-solution activity.

Dependent claim 16 recites “further comprising displaying virtual content at the display set of two-dimensional coordinates,” which is merely a data output step and thus insignificant extra-solution activity.

Dependent claim 17 recites “transforming an image of the environment based on the depth information of the physical environment,” which falls under the grouping of mathematical concepts because the transformation involves mathematical operations.

Claim 18 recites “transform the camera set of two-dimensional coordinates into a display set of two-dimensional coordinates based on the depth information of the physical environment excluding the user input object,” which falls under the grouping of mathematical concepts because the transforming step involves mathematical operations. The claim further recites “determining an input set of three-dimensional coordinates of the user input object by triangulating the display set of two-dimensional coordinates and a second display set of two-dimensional coordinates,” which also falls under the abstract idea grouping of mathematical concepts because the triangulation involves mathematical operations. The claim further recites “obtain a camera set of two-dimensional coordinates of a user input object in a physical environment; obtain depth information of the physical environment,” which are merely data gathering steps and thus insignificant extra-solution activity. The claim does not recite additional limitations that would integrate the abstract idea into a practical application, nor does it involve an inventive concept.

Dependent claim 19 recites “further comprising determining a user input according to the input set of three-dimensional coordinates,” which falls under the grouping of mathematical processes because it is determined based on triangulation.

With regard to claim 20, see discussion of claim 1.
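For orientation on what claim 1's transforming step describes geometrically: in the usual pinhole-camera reading, a 2D camera coordinate is lifted to a 3D point using the scene depth at that pixel and then reprojected into the display's view. The sketch below is a generic textbook version under that assumption, with illustrative parameter names; it is not the application's disclosed algorithm.

```python
import numpy as np

def camera_to_display(uv_cam, depth_at_uv, K_cam, K_disp, R, t):
    """Map a 2D camera pixel to 2D display coordinates using scene depth.

    Per the claim language, depth_at_uv is the depth of the physical
    environment at that pixel (e.g. the background), which may differ
    from the depth of the user input object itself.
    """
    # Unproject: lift the pixel to a 3D point at the given scene depth.
    p_cam = depth_at_uv * (np.linalg.inv(K_cam) @ np.array([*uv_cam, 1.0]))
    # Re-express in the display camera's frame (extrinsics R, t assumed known).
    p_disp = R @ p_cam + t
    # Reproject with the display intrinsics and dehomogenize.
    uvw = K_disp @ p_disp
    return uvw[:2] / uvw[2]
```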
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8-9, 11, 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by D1¹.

With regard to claim 1, D1 teaches, at a device having one or more processors and non-transitory memory (see abstract: processor and memory inherent): obtaining a camera set of two-dimensional coordinates of a user input object in a physical environment (see fig. 3, p. 3720 last ¶: the system tracks the user’s fingers using a worn glove and the OptiTrack system); obtaining depth information of the physical environment, wherein the depth information includes a depth value for the camera set of two-dimensional coordinates that is different than a depth to the user input object (see fig. 3, p. 3720 last ¶: a depth camera is used at a calibration stage to measure the physical proxy location); and transforming the camera set of two-dimensional coordinates into a display set of two-dimensional coordinates based on the depth information of the physical environment (see fig. 4, p. 3721: fig. 4 illustrates how Haptic Retargeting redirects the user's hand to a physical location different than the virtual target location. The virtual hand (skin color) and the physical hand (blue color) are initially at the same position. As the user's hand moves toward the virtual target, the virtual hand position gradually deviates in a direction away from the haptic proxy, causing the user to correct the trajectory toward the virtual target; thus, the physical hand is gradually moved toward the haptic proxy. The virtual hand (skin color) shifts from its true physical location (semi-transparent blue color), inducing the user to correct the hand movement to touch the physical prop).

With regard to claim 2, D1 teaches the method of claim 1, wherein the user input object includes at least a portion of a hand of a user (see fig. 4).

With regard to claim 3, D1 teaches the method of claim 1, wherein the user input object includes a handheld device (see p. 3725 col. 2 ¶ 2: handheld controllers).

With regard to claim 4, D1 teaches the method of claim 1, wherein obtaining the camera set of two-dimensional coordinates includes obtaining a physical set of three-dimensional coordinates of the user input object and projecting the physical set of three-dimensional coordinates to a camera image plane (see p. 3720 last ¶: the system tracks the user’s fingers using a worn glove and the OptiTrack system).

With regard to claim 5, D1 teaches the method of claim 1, wherein obtaining the camera set of two-dimensional coordinates includes detecting the user input object in an image of the physical environment (see fig. 3, p. 3720 last ¶: a depth camera is used at a calibration stage to measure the physical proxy location).

With regard to claim 8, D1 teaches the method of claim 1, wherein the depth information of the physical environment is a static depth map (see fig. 3, p. 3720 last ¶: a depth camera is used at a calibration stage to measure the physical proxy location; implicit that the depth map is static).

With regard to claim 9, D1 teaches the method of claim 1, wherein the depth value for the camera set of two-dimensional coordinates represents a depth to a static object behind the user input object (see fig. 4).

With regard to claim 11, D1 teaches the method of claim 1, wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates at a time the user input object was not at the camera set of two-dimensional coordinates (see fig. 3, p. 3720 last ¶: a depth camera is used at a calibration stage to measure the physical proxy location; implicit at the time of the calibration stage).

With regard to claim 14, D1 teaches the method of claim 13, further comprising determining a user input according to the input set of three-dimensional coordinates (see fig. 1).

With regard to claim 15, D1 teaches the method of claim 14, further comprising changing display of virtual content in response to the user input (see fig. 2, p. 3724 ¶ 8: operations can simulate dials, sliders, touch screens, etc., such as the dial on the safe in our room scene in figure 2).

With regard to claim 16, D1 teaches the method of claim 1, further comprising displaying virtual content at the display set of two-dimensional coordinates (see fig. 4).

With regard to claim 17, D1 teaches the method of claim 1, further comprising transforming an image of the environment based on the depth information of the physical environment, and displaying the transformed image (see fig. 4).

With regard to claim 18, see discussion of claim 1. With regard to claim 19, see discussion of claim 14. With regard to claim 20, see discussion of claim 1.
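The claim 1 mapping leans on D1's haptic retargeting, in which the rendered hand is progressively offset from the true hand so that a user reaching for the virtual target lands the physical hand on the proxy. A rough sketch of that warping under the simplest linear-blend assumption; D1's actual redirection function may differ:

```python
import numpy as np

def rendered_hand(hand, start, virtual_target, physical_proxy):
    """Offset the rendered (virtual) hand from the tracked physical hand.

    As the physical hand travels from `start` toward the proxy, the
    rendered hand drifts by up to (virtual_target - physical_proxy), so a
    user who visually homes in on the virtual target ends up physically
    touching the proxy.
    """
    reach = np.linalg.norm(physical_proxy - start)
    progress = np.clip(np.linalg.norm(hand - start) / reach, 0.0, 1.0)
    return hand + progress * (virtual_target - physical_proxy)
```

At progress 0 the rendered hand coincides with the physical hand; at progress 1 the physical hand is at the proxy while the rendered hand appears at the virtual target, matching the behavior the examiner describes from fig. 4.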
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7, 10, 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over D1.

With regard to claim 6, D1 teaches the method of claim 1 but fails to explicitly teach wherein the depth information of the physical environment is a smoothed depth map. However, the Examiner takes Official Notice of the fact that generating a smoothed depth map was well known in the art before the effective filing date, and one skilled in the art would have been motivated to incorporate the known teachings into the configuration of D1, yielding predictable and enhanced results by reducing artifacts.

With regard to claim 7, D1 teaches the method of claim 1 but fails to explicitly teach wherein the depth information of the physical environment is a clamped depth map. However, the Examiner takes Official Notice of the fact that generating a clamped depth map was well known in the art before the effective filing date, and one skilled in the art would have been motivated to incorporate the known teachings into the configuration of D1, yielding predictable and enhanced results by reducing artifacts.

With regard to claim 10, D1 teaches the method of claim 1 but fails to explicitly teach wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates via interpolation using depth values of locations surrounding the camera set of two-dimensional coordinates. However, the Examiner takes Official Notice of the fact that interpolation of depth values was extremely well known in the art before the effective filing date, and one skilled in the art would have been motivated to incorporate the known teachings into the configuration of D1, yielding predictable and enhanced depth values by using interpolation for missing depth values.

With regard to claim 12, D1 teaches the method of claim 1 but fails to explicitly teach wherein obtaining the depth information of the physical environment includes determining the depth value for the camera set of two-dimensional coordinates based on a three-dimensional model of the physical environment excluding the user input object. However, the Examiner takes Official Notice of the fact that it is well known in the art to use a 3D model of the environment to determine depth or other values, and one skilled in the art would have found it obvious before the effective filing date to incorporate the known teachings into the configuration of D1, yielding predictable and enhanced depth results.

With regard to claim 13, D1 teaches the method of claim 1 but fails to explicitly teach further comprising determining an input set of three-dimensional coordinates of the user input object by triangulating the display set of two-dimensional coordinates and a second display set of two-dimensional coordinates. However, the Examiner takes Official Notice of the fact that it is well known in the art to triangulate the 3D coordinates of an object based on left and right images, such as from a stereoscopic sensor, and one skilled in the art before the effective filing date would have found it obvious to incorporate the known teachings into the configuration of D1, yielding predictable and enhanced 3D estimation based on left and right images.
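The Official Notice rationales for claims 10 and 13 invoke two textbook operations: filling a depth value by interpolating from surrounding pixels, and triangulating a 3D point from two 2D observations. A generic numpy sketch of both, in their standard formulations rather than anything taken from the application or D1:

```python
import numpy as np

def interpolated_depth(depth_map, u, v):
    """Bilinearly interpolate depth at (u, v) from the four surrounding
    pixels: one common form of claim 10's interpolation "using depth
    values of locations surrounding" the coordinate."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    fu, fv = u - u0, v - v0
    z00, z10 = depth_map[v0, u0], depth_map[v0, u0 + 1]
    z01, z11 = depth_map[v0 + 1, u0], depth_map[v0 + 1, u0 + 1]
    return ((1 - fu) * (1 - fv) * z00 + fu * (1 - fv) * z10
            + (1 - fu) * fv * z01 + fu * fv * z11)

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: the point closest to two rays, each given
    by an origin c and unit direction d (e.g. back-projected from the
    left and right views of a stereoscopic sensor, as in the claim 13
    rationale)."""
    # Solve [d1 | -d2] [s, t]^T ≈ c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    # Midpoint of the closest-approach segment between the two rays.
    return ((c1 + s * d1) + (c2 + t * d2)) / 2.0
```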
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVINASH YENTRAPATI, whose telephone number is (571) 270-7982. The examiner can normally be reached 8AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AVINASH YENTRAPATI/
Primary Examiner, Art Unit 2672

¹ Cheng, Lung-Pan, et al. "Sparse haptic proxy: Touch feedback in virtual environments using a general passive prop." Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017.

Prosecution Timeline

Jun 21, 2023
Application Filed
May 15, 2024
Response after Non-Final Action
Dec 13, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602803
HEAD-MOUNTED DISPLAY AND METHOD FOR DEPTH PREDICTION
2y 5m to grant; granted Apr 14, 2026
Patent 12579791
AUTOMATED METHODS FOR GENERATING LABELED BENCHMARK DATA SET OF GEOLOGICAL THIN-SECTION IMAGES FOR MACHINE LEARNING AND GEOSPATIAL ANALYSIS
2y 5m to grant; granted Mar 17, 2026
Patent 12562264
METHOD FOR THE RECOMPOSITION OF A KIT OF SURGICAL INSTRUMENTS AND CORRESPONDING APPARATUS
2y 5m to grant; granted Feb 24, 2026
Patent 12536646
STRUCTURE DAMAGE CAUSE ESTIMATION SYSTEM, STRUCTURE DAMAGE CAUSE ESTIMATION METHOD, AND STRUCTURE DAMAGE CAUSE ESTIMATION SERVER
2y 5m to grant; granted Jan 27, 2026
Patent 12536654
THE SYSTEM AND METHOD FOR STOOL IMAGE ANALYSIS
2y 5m to grant; granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 69% (-5.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 671 resolved cases by this examiner. Grant probability derived from career allow rate.
