Prosecution Insights
Last updated: April 19, 2026
Application No. 18/691,424

EXPANDED FIELD OF VIEW USING MULTIPLE CAMERAS

Non-Final OA — §103, §DP
Filed
Mar 12, 2024
Examiner
LIEW, ALEX KOK SOON
Art Unit
2674
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
1 (Non-Final)
88%
Grant Probability
Favorable
1-2
OA Rounds
2y 7m
To Grant
95%
With Interview

Examiner Intelligence

Grants 88% — above average
88%
Career Allow Rate
957 granted / 1094 resolved
+25.5% vs TC avg
+7.2%
Interview Lift
Moderate lift among resolved cases with interview
Typical timeline
2y 7m
Avg Prosecution
18 currently pending
Career history
1112
Total Applications
across all art units

Statute-Specific Performance

§101 — 8.6% (-31.4% vs TC avg)
§103 — 44.7% (+4.7% vs TC avg)
§102 — 13.5% (-26.5% vs TC avg)
§112 — 3.0% (-37.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1094 resolved cases
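One detail worth noting: subtracting each statute's delta from its rate yields the same value for every statute, suggesting the Tech Center baseline shown is a single ~40% estimate. A quick check, assuming the deltas are plain percentage-point differences (an assumption; the tool does not document its methodology):

```python
# Statute rate (%) and examiner-vs-TC delta (percentage points), from the card above.
stats = {"101": (8.6, -31.4), "103": (44.7, +4.7),
         "102": (13.5, -26.5), "112": (3.0, -37.0)}

# Implied TC average for each statute: rate minus delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% baseline
```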

Office Action

§103 §DP
DETAILED ACTION

[1] Remarks

I. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

II. Claims 1-20 are pending and have been examined. Claims 1, 5-13, and 16-19 are rejected; claims 2-4 and 14-15 are objected to; and claim 20 is allowed. Explanations are provided below.

III. Inventor and assignee searches were performed, and the examiner determined that no double patenting rejections are necessary.

IV. Patent eligibility (under the 2019 guidance): Claims 1-20 pass the patent-eligibility test because no limitation or combination of limitations amounts to an abstract idea. Further, the following limitations: “determine depth information for the one or more first images; extend the depth information outward from one or more edges of the one or more first images to generate an expanded region; and reproject pixel data from the one or more second images into the expanded region to generate an expanded field of view image of a scene in the environment” effect a transformation or reduction of a particular article to a different state or thing, add specific limitations beyond what is well-understood, routine, and conventional in the field, and confine the claim to a particular useful application while providing improvements to the technical field of depth measurement. These additional elements integrate any judicial exception into a practical application and amount to significantly more.

V. The PCT application, PCT/US2022/044460, was considered, and the examiner determined that no reference prior art is relevant to the claims of the current application.

[2] Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). That presumption is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.

Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). That presumption is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material, or acts to perform that function.

Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
Claims 1-13 and 20 are not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the limitations are modified by sufficient structure or material for performing the claimed function. Claims 14-19 do not require 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, interpretation because they are method claims and/or CRM claims. Upon examination of the specification and claims, the examiner has determined, under the best understanding of the scope of the claims, that rejections under 35 U.S.C. 112(a)/(b) are not necessitated because sufficient support is provided in the written description and drawings of the invention.

[3] Grounds of Rejection

Claim Rejections - 35 USC § 103

1. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

2. Claims 1, 5, 7, 9-13, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi (US 11393114) in view of BLASCO CLARET (US 20200134849).
Regarding claim 1, Ebrahimi Afrouzi discloses a system, comprising: one or more primary cameras configured to capture one or more first images of an environment (see figure 1; 101 is read as the first camera); one or more secondary cameras configured to capture one or more second images of the environment from different viewpoints than the one or more primary cameras (see figure 1; 105 captures from a different point of view); and one or more processors configured to determine depth information for the one or more first images (see column 2, lines 52-57: capturing data by one or more sensors of one or more vehicles moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the one or more vehicles to objects in the working environment at a plurality of different sensor poses) and to reproject pixel data from the one or more second images into the expanded region to generate an expanded field of view (FoV) image of a scene in the environment (see column 22, lines 59-65: the processor uses the raw pixel intensity values to determine the area of overlap between data captured within overlapping fields of view to combine data and construct a map of the environment, where, for overlapping images, the area in which the two images overlap contains a similar arrangement of pixel intensities in at least a portion of the digital image).

Ebrahimi Afrouzi is silent on extending the depth information outward from one or more edges of the one or more first images to generate an expanded region.
BLASCO CLARET discloses extending the depth information outward from one or more edges of the one or more first images to generate an expanded region (see figure 16B, 1622: compute slopes of epipolar lines and extend epipolar lines if baselines are large enough, where the epipolar lines are read as the edges and the images are recorded in consecutive frames, reading on the first images).

[Image: media_image1.png]

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to extend the depth information outward from one or more edges of the one or more first images to generate an expanded region in order to generate new areas beyond the original image boundaries; extending the depth information outward allows the system to predict the structure of these unseen regions, enabling smooth camera motion and preventing abrupt visual artifacts at the image borders.

Regarding claim 5, Ebrahimi Afrouzi discloses the system as recited in claim 1, wherein the one or more processors are configured to undistort the one or more second images prior to said reprojection (see figure 1A; 114 is the reprojected image, and the images acquired by 101 and 105 are undistorted).

Regarding claim 7, Ebrahimi Afrouzi discloses the system as recited in claim 1, wherein the one or more primary cameras include two front-facing cameras on a device that provide stereo images of the scene, and wherein the one or more secondary cameras include at least one camera on at least two sides of the device (see figure 1A; 101 and 105 are the front first and second cameras, respectively; also see column 4, lines 61-64: stereo cameras are employed, with 105 on the side of the device).
Regarding claim 9, Ebrahimi Afrouzi discloses the system as recited in claim 7, wherein the cameras on the at least two sides of the device include wider-FoV cameras than the two front-facing cameras (see figure 2A; 200 and 202 are on the left and right sides, respectively).

Regarding claim 10, Ebrahimi Afrouzi discloses the system as recited in claim 7, wherein the images captured by at least one camera on a first side of the device are used to extend the FoV of a first one of the two front-facing cameras, and wherein the images captured by at least one camera on a second side of the device are used to extend the FoV of a second one of the two front-facing cameras (see figure 1A; the first image captured by 101 is extended with the image captured by 105).

Regarding claim 11, Ebrahimi Afrouzi discloses the system as recited in claim 1, wherein the depth information is sparse depth information that provides depth for edges in a scene captured by the one or more primary cameras (see column 6, lines 20-25: applying an edge detection algorithm (such as Haar or Canny) to readings from the different fields of view and aligning edges in the resulting transformed outputs).

Regarding claim 12, Ebrahimi Afrouzi discloses the system as recited in claim 1, wherein the one or more secondary cameras include grayscale cameras, and wherein the one or more processors are further configured to extend color from images captured by the one or more primary cameras into the expanded region (see column 6, lines 14-18: if the processor compares the color depth of two images and both are observed to have the greatest rates of change in similar locations, the processor hypothesizes that the two images have overlapping data points; the cameras are color depth cameras, so the extended regions are also color regions).

Regarding claim 13, see the rationale and rejection for claim 1. In addition, see figure 4; 400 shows the processor.
Regarding claim 16, see the rationale and rejection for claim 5.

Regarding claim 18, see the rationale and rejection for claim 7.

Regarding claim 19, Ebrahimi Afrouzi discloses the method as recited in claim 18, wherein reprojecting pixel data from the one or more second images into the expanded region to generate an expanded field of view (FoV) image of a scene in the environment comprises: reprojecting pixel data from the images captured by at least one camera on a first side of the device into an expanded region around an image captured by a first one of the two front-facing cameras (see figure 2A; 201 and 202 read as the second set of cameras); and reprojecting pixel data from the images captured by at least one camera on a second side of the device into an expanded region around an image captured by a second one of the two front-facing cameras (see figure 2A; 200 and 201 read as the second set of cameras).

[Image: media_image2.png]

3. Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi (US 11393114) in view of BLASCO CLARET (US 20200134849) and Nanri (US 20130114887).

Regarding claim 6, Ebrahimi Afrouzi and BLASCO CLARET disclose all the limitations of claim 1 but are silent on the one or more processors being configured to blur the extended region. Nanri discloses the system as recited in claim 1, wherein the one or more processors are configured to blur the extended region (see figure 1; 102, the second stereo image, is the extended region, which is blurred).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to blur the extended region because blurring the peripheral or extended region helps direct the user's or system's attention to the central area of interest; this trains the eye to focus on specific objects or areas, mimicking how human eyes focus on a subject with a shallow depth of field.

Regarding claim 17, see the rationale and rejection for claim 6.

4. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi (US 11393114) in view of BLASCO CLARET (US 20200134849) and Geisner (US 20120206452).

Regarding claim 8, Ebrahimi Afrouzi and BLASCO CLARET disclose all the limitations of claim 7 but are silent on the device being a head-mounted device (HMD) wherein the one or more processors are configured to provide the expanded FoV image to a display panel of the HMD for display to a user. Geisner discloses the system as recited in claim 7, wherein the device is a head-mounted device (HMD), and wherein the one or more processors are configured to provide the expanded FoV image to a display panel of the HMD for display to a user (see figure 1A: the HMD shown combines the images acquired by first and second cameras on the HMD; also see paragraph 42: one or more cameras capture image data of a field of view of the display of a display device system).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to make the device a head-mounted device (HMD), with the one or more processors configured to provide the expanded FoV image to a display panel of the HMD for display to a user, in order to create a sense of immersion that is not possible with traditional flat screens; the images are displayed in real time, matching the imagery to the user's head movements, which makes the virtual experience feel realistic and natural.

[4] Claim Objections

Claims 2-4 and 14-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

With regard to claim 2, the examiner cannot find any applicable prior art teaching the following limitations: “the system as recited in claim 1, wherein, to extend the depth information outward from one or more edges of the one or more first images to generate an expanded region, the one or more processors are configured to: extend the depth information outward from the one or more edges of the one or more first images for a first distance to generate a second layer, wherein the one or more first images are a first layer; and extend a median depth determined from the depth information outward from one or more edges of the second layer for a second distance to generate a second layer” in combination with the rest of the limitations of claim 1.
BLASCO CLARET discloses the system as recited in claim 1, wherein, to extend the depth information outward from one or more edges of the one or more first images to generate an expanded region, the one or more processors are configured to extend the depth information outward from the one or more edges of the one or more first images for a first distance to generate a second layer, wherein the one or more first images are a first layer (see the figure 6B illustration below).

[Image: media_image3.png]

BLASCO CLARET is silent on extending a median depth determined from the depth information outward from one or more edges of the second layer for a second distance.

Regarding claim 14, see the rationale for claim 2.

Claims 3-4 and 15 are objected to as well because they depend on a claim with allowable subject matter.

Shenoy (US 20150296319) discloses that depth to the direction of arrival (azimuth) for each column of image pixels or regions can be determined using a mean or median depth value for the column group (see paragraph 108), but does not disclose a median depth determined from the depth information extended outward from one or more edges of the second layer for a second distance.

[5] Allowable Claim

Claim 20 is allowable. See the rejection for claim 1 and the objection rationale for claim 2.
Ebrahimi Afrouzi discloses a device, comprising: two front-facing cameras configured to capture stereo images of a scene in an environment (see figure 2A; 200 and 201 are two front-facing cameras); at least one camera on at least two sides of the device configured to capture additional images of the scene (see figure 2A; cameras are placed on the right and left sides); and one or more processors configured to render expanded field of view (FoV) stereo images of the scene (see figure 4, where 400 is the processor, and figure 2A, where 212 is the expanded FoV stereo image). To render the expanded FoV stereo images, the one or more processors are configured to: determine depth information for the stereo images captured by the front-facing cameras (see column 2, lines 52-57: capturing data by one or more sensors of one or more vehicles moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the one or more vehicles to objects in the working environment at a plurality of different sensor poses); and reproject pixel data from the images captured by at least one camera on a first side of the device into the second and third layers around an image captured by a first one of the two front-facing cameras (see figure 2A; 200 and 201 read as the second set of cameras; see the illustrations in figure 2A and figure 6B).

[Image: media_image2.png]

BLASCO CLARET discloses extending the depth information outward from the one or more edges of each of the stereo images for a first distance to generate a second layer, wherein the stereo images are a first layer (see the figure 6B illustration below).

[Image: media_image4.png]

Ebrahimi Afrouzi and BLASCO CLARET are silent on extending a median depth determined from the depth information outward from one or more edges of the second layer for a second distance to generate a third layer.
Shenoy (US 20150296319) discloses that depth to the direction of arrival (azimuth) for each column of image pixels or regions can be determined using a mean or median depth value for the column group (see paragraph 108), but does not disclose extending a median depth determined from the depth information outward from one or more edges of the second layer for a second distance to generate a third layer.

CONTACT INFORMATION

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEX LIEW (duty station: New York City), telephone (571) 272-8623, fax (571) 273-8623, cell (917) 763-1192, or email alexa.liew@uspto.gov. Please note that the examiner cannot reply through email unless an internet communication authorization is provided by the applicant. The examiner can be reached anytime. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MISTRY ONEAL R, can be reached at (313) 446-4912. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. For questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ALEX KOK S LIEW/
Primary Examiner, Art Unit 2674
Telephone: 571-272-8623
Date: 1/13/26
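The rejected independent claim describes a concrete pipeline: estimate depth for the primary image, extend that depth outward past the image edges (per claim 2, a first ring of edge-replicated depth, then an outer ring filled with the median scene depth), and reproject secondary-camera pixels into the expanded border. A minimal numpy sketch of that idea follows; it is not the applicant's actual implementation, and the pinhole camera model, the ring widths d1/d2, and the nearest-neighbor sampling are all assumptions for illustration:

```python
import numpy as np

def expand_fov(primary_img, primary_depth, secondary_img,
               K_p, K_s, R_ps, t_ps, d1=16, d2=16):
    """Sketch: extend depth outward from the primary image's edges,
    then reproject secondary-camera pixels into the expanded border.
    K_p, K_s are 3x3 pinhole intrinsics; R_ps, t_ps map primary-camera
    coordinates to secondary-camera coordinates (assumed known).
    Assumes all reprojected points lie in front of the secondary camera."""
    h, w = primary_depth.shape
    pad = d1 + d2

    # "Second layer": replicate edge depth outward for d1 pixels.
    depth = np.pad(primary_depth, d1, mode="edge")
    # "Third layer": fill the outer d2-pixel ring with the median scene depth.
    depth = np.pad(depth, d2, mode="constant",
                   constant_values=float(np.median(primary_depth)))

    H, W = depth.shape
    out = np.zeros((H, W, 3), dtype=primary_img.dtype)
    out[pad:pad + h, pad:pad + w] = primary_img  # first layer: primary image

    # Expanded-canvas pixel grid in primary-camera pixel coordinates.
    ys, xs = np.mgrid[0:H, 0:W]
    xs = xs - pad
    ys = ys - pad

    # Unproject every expanded pixel to 3D using the extended depth...
    z = depth
    x3 = (xs - K_p[0, 2]) / K_p[0, 0] * z
    y3 = (ys - K_p[1, 2]) / K_p[1, 1] * z
    pts = np.stack([x3, y3, z], axis=-1).reshape(-1, 3)

    # ...transform into the secondary camera and project.
    pts_s = pts @ R_ps.T + t_ps
    u = K_s[0, 0] * pts_s[:, 0] / pts_s[:, 2] + K_s[0, 2]
    v = K_s[1, 1] * pts_s[:, 1] / pts_s[:, 2] + K_s[1, 2]
    u = np.round(u).astype(int).reshape(H, W)
    v = np.round(v).astype(int).reshape(H, W)

    # Fill only the border ring, sampling secondary pixels that land in-bounds.
    sh, sw = secondary_img.shape[:2]
    border = np.ones((H, W), dtype=bool)
    border[pad:pad + h, pad:pad + w] = False
    valid = border & (u >= 0) & (u < sw) & (v >= 0) & (v < sh)
    out[valid] = secondary_img[v[valid], u[valid]]
    return out
```

The two-ring depth padding is what distinguishes the objected-to claims 2-4 from the art of record: the outer ring uses a single median depth rather than continuing the edge values, which keeps far-border reprojections stable when edge depth is noisy.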

Prosecution Timeline

Mar 12, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §103, §DP
Mar 25, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597112
INSPECTION DEVICE, INSPECTION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12597144
ANTERIOR SEGMENT ANALYSIS APPARATUS, ANTERIOR SEGMENT ANALYSIS METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12597150
OBTAINING A DEPTH MAP
2y 5m to grant Granted Apr 07, 2026
Patent 12579795
DIAGNOSIS SUPPORT SYSTEM, DIAGNOSIS SUPPORT METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12572999
INCREASING RESOLUTION OF DIGITAL IMAGES USING SELF-SUPERVISED BURST SUPER-RESOLUTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
95%
With Interview (+7.2%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 1094 resolved cases by this examiner. Grant probability derived from career allow rate.
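The 95% "With Interview" figure appears to be the base grant probability plus the interview lift in percentage points. A sanity check, assuming the lift is simply additive (an assumption; the tool may model it differently):

```python
base_grant = 88.0      # career-derived grant probability, %
interview_lift = 7.2   # percentage-point lift with an interview

with_interview = base_grant + interview_lift
print(round(with_interview))  # 95
```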
