Prosecution Insights
Last updated: April 19, 2026
Application No. 18/676,895

CONTROL DEVICE, CONTROL METHOD, AND CONTROL PROGRAM

Non-Final OA §103
Filed: May 29, 2024
Examiner: BASHIR, ADEEL
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 94% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (33 granted / 35 resolved; +32.3% vs TC avg) — grants above average
Interview Lift: +7.4% (moderate lift, comparing resolved cases with vs. without an interview)
Avg Prosecution: 2y 6m (typical timeline)
Total Applications: 49 across all art units (14 currently pending)
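The headline figures above follow from simple ratios over the examiner's 35 resolved cases. A quick sketch of the arithmetic, assuming (as our reading of the dashboard, not a documented formula) that the allow rate is granted/resolved and the "vs TC avg" figure is a percentage-point difference:

```python
# Sanity-check the dashboard's examiner statistics.
# Assumption: "Career Allow Rate" = granted / resolved, shown rounded to a whole percent.
granted, resolved = 33, 35
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 94.3%, displayed as 94%

# Assumption: "+32.3% vs TC avg" is a percentage-point gap, which would put
# the Tech Center 2600 average allow rate near 62%.
implied_tc_avg = allow_rate - 32.3
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```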

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 85.0% (+45.0% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 0.8% (-39.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 35 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Priority

Acknowledgment is made of applicant’s foreign priority claim, for U.S. Application No. 18/676,895, based on a foreign application filed on 10/31/2022.

Status of Claims

Claims 1-18 are pending in the application. Claims 1-13 and 15-18 are rejected. Claim 14 is objected to.

Allowable Subject Matter

Claim 14 is objected to as being dependent upon a rejected base claim(s), but would be allowable if rewritten in independent form including all of the limitations of the base claim(s) and any intervening claim(s).

Overview of Grounds of Rejection

Ground of Rejection 1 — Claims 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 17, 18 — § 103 — Price et al. (US20200234498A1) in view of King et al. (US20110145068A1) and further in view of Cazier et al. (US20080122859A1)
Ground of Rejection 2 — Claim 3 — § 103 — Price et al. (US20200234498A1) in view of Marason et al. (US9462255B1)
Ground of Rejection 3 — Claims 12, 13, 15, 16 — § 103 — Price et al. (US20200234498A1) in view of King et al. (US20110145068A1)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
(Please see the cited paragraphs, sections, pages, or surrounding text in the references for the paraphrased content.)

Ground of Rejection 1

Claims 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 17, 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Price et al. (US20200234498A1) in view of King et al. (US20110145068A1) and further in view of Cazier et al. (US20080122859A1).

As per Claim 1, Price et al. teach the following portion of Claim 1, which recites: “A control device comprising a processor,”

Price et al. (US20200234498A1) teaches a device architecture that includes a processor, stating: “the computing architecture 700 includes processor 704, a system memory 706 and a system bus 708.” Price et al. (US20200234498A1), ¶[0050]. Price’s “computing architecture” with “processor 704” reads on the claimed control device comprising a processor.

Price et al. teach the following portion of Claim 1, which recites: “wherein the processor is configured to: generate third image data based on first image data and second image data;”

Price et al. (US20200234498A1) teaches generating an output image view based on (i) image(s) of a physical object and (ii) virtual object content overlaid on that physical object. For the first image data (image(s) of the physical object), Price states: “A computing device may receive one or more images of the physical object …” Price et al. (US20200234498A1), ¶[0004]. For generating a composite output that includes both the physical object imagery and the virtual object (corresponding to third image data based on first and second image data), Price states: “a composite view may be digitally rendered or generated, which includes the 3D virtual object … placed or overlaid on the physical object.” Price et al. (US20200234498A1), ¶[0045]. Price’s “one or more images of the physical object” correspond to the claimed first image data, and Price’s “3D virtual object” corresponds to second image data.
Price’s generated “composite view” that includes the virtual object overlaid on the physical object corresponds to generating third image data based on first image data and second image data.

Price et al. teach the following portion of Claim 1, which recites: “and output the third image data,”

Price et al. (US20200234498A1) teaches outputting the generated composite image view by display, stating: “a computing device 504 displays the composite view 500 …” Price et al. (US20200234498A1), ¶[0035]. Displaying the generated “composite view” is an output of the generated composite image data (the claimed third image data).

Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach some of the limitation(s).

King teaches the following portion of Claim 1, which recites: “and a second image based on the second image data in a third image represented by the third image data is an image indicating that specific content, which is different from a content of the third image, is associated with the third image,”

Price et al. (US20200234498A1) describes compositing real-world imagery with virtual content, but does not describe the “second image” as an indicator that different “specific content” is associated with the third image. King et al. (US20110145068A1) supplies this teaching. King et al. (US20110145068A1) teaches overlaying a displayed image with additional display elements and using indicators that denote associated digital content. King teaches putting “second image” type elements into an image view by overlay, stating: “provides an enhanced view of a document by overlaying a display showing the document with various display elements.” King et al. (US20110145068A1), ¶[0161].
King teaches that indicators in the displayed image indicate associated digital content, stating: “display visual indicators, such as icons, codes, and so on, that indicate the association and/or availability of digital content.” King et al. (US20110145068A1), ¶[0601]. King’s overlaid “display elements” and “visual indicators” correspond to the claimed second image based on the second image data in a third image (the enhanced view). King further teaches that these indicators indicate the association of digital content with the rendered item, which aligns with the claim’s requirement that the second image indicates that specific content, different from the third image’s own content, is associated with the third image.

Price and King alone do not explicitly teach all the limitation(s) of the claim. However, when combined with Cazier, they collectively teach all of the limitation(s).

Cazier teaches the following portion of Claim 1, which recites: “and surrounds a first image based on the first image data in the third image.”

King et al. (US20110145068A1) teaches overlays and indicators, but does not require the indicator to surround the first image. Cazier et al. (US20080122859A1) supplies this teaching. Cazier et al. (US20080122859A1) teaches a border frame that surrounds an image, stating: “a user can select a border frame around an image in a picture.” Cazier et al. (US20080122859A1), ¶[0001]. Cazier further describes the image fitting within the surrounding border, referring to “the area surrounded by the border 216.” Cazier et al. (US20080122859A1), ¶[0026]. In Cazier, the “image” corresponds to the claimed first image based on the first image data, and the “border frame” corresponds to a second image that surrounds that first image within the displayed picture (the claimed third image).

Before the effective filing date of the claimed invention, a person of ordinary skill in the art (POSITA) would have been motivated to combine Price et al. (US20200234498A1) with King et al. (US20110145068A1) because Price generates and outputs a composite view from image data and overlaid content, and King teaches overlaying visual indicators to show the association and availability of additional digital content, thereby enhancing user interaction with the displayed view. In applying King’s association indicators within Price’s composite scene, a POSITA would further have been motivated to incorporate Cazier et al. (US20080122859A1) because complex scenes can include multiple objects and background elements, and it can otherwise be unclear which physical object or image region is linked to the associated content. Cazier teaches using a border frame around an image, which delimits the spatial extent of the relevant object/image region. Integrating Cazier’s surrounding border into the Price and King system would improve identification of the linked object or region, reduce user confusion and selection errors, and provide a predictable, straightforward UI enhancement with expected results.

As per Claim 2, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Cazier, they collectively teach all of the limitation(s).

Cazier teaches Claim 2 which recites: “The control device according to claim 1, wherein the second image is in contact with at least a part of an outer periphery of the first image, and does not include an internal region of the first image.”

Cazier et al. (US20080122859A1) teaches a border/frame that is positioned at the edge of an image (periphery) and, in an embodiment, avoids overlaying the image interior. Cazier states a user can select a “border frame around an image” in a picture. Cazier et al. (US20080122859A1), ¶[0001].
Cazier further teaches that the system can “reduce or shrink down the image 210a to fit completely within the border 216 … instead of permitting an overlay … over the image 210a.” Cazier et al. (US20080122859A1), ¶[0025]. Cazier also describes “the area surrounded by the border 216.” Cazier et al. (US20080122859A1), ¶[0026]. A “border frame around an image” is a frame that is at the image boundary (in contact with at least part of the outer periphery), and Cazier’s “fit completely within the border … instead of permitting an overlay over the image” describes a border presentation that does not occupy the image interior, which aligns with “does not include an internal region of the first image.”

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

As per Claim 4, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Cazier, they collectively teach all of the limitation(s).

Cazier teaches Claim 4 which recites: “The control device according to claim 1, wherein the processor is configured to generate the third image data based on information on a display region in which the third image is to be displayed, the first image data, and the second image data.”

Cazier et al. (US20080122859A1) teaches generating the output picture data using (i) display-region information (the border-defined region), together with (ii) first image data (the object image) and (iii) second image data (the border frame), for example: “the engine 180 can instead reduce or shrink down the image 210a to fit completely within the border 216 …” — Cazier et al., ¶[0025]; “The image 210a can be reduced to fit within the border 216 if … the stored area size of image 210a … was originally larger than the area surrounded by the border 216.” — Cazier et al., ¶[0026].
In Cazier, the processor generates the final displayed picture (third image data) by using the “area surrounded by the border” as display-region information, and generating the composite such that the image 210a (first image data) is shrunk to fit within the border 216 (second image data). This matches generating third image data based on display region + first image data + second image data.

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

As per Claim 5, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Cazier, they collectively teach all of the limitation(s).

Cazier teaches Claim 5 which recites: “The control device according to claim 1, wherein the second image is an image determined based on a color of the first image.”

Cazier et al. (US20080122859A1) teaches determining the border (second image) based on the color(s) of the object image (first image), stating the “border frame color picker engine 180 determines the color 205 for the border frame 216 for an object image 210a.” Cazier et al., ¶[0022]. The same reference further states the engine “selects the border frame color … by evaluating or selecting the pixels in the image,” and that the “color … evaluated in the image 210a … is then used to determine … a border frame color 205.” Cazier et al., ¶[0021], ¶[0026]. Cazier’s border frame 216 (second image) is determined based on evaluated pixel color(s) of the object image 210a (first image), matching Claim 5.

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

As per Claim 6, Price alone does not explicitly teach all the limitation(s) of the claim.
However, when combined with Cazier, they collectively teach all of the limitation(s).

Cazier teaches Claim 6 which recites: “The control device according to claim 1, wherein the second image is a frame image that surrounds at least a part of the first image.”

Cazier et al. (US20080122859A1) teaches this limitation, stating: a user can select a “border frame around an image” (a frame image surrounding the image). Cazier et al., ¶[0001]. Cazier further describes “the area surrounded by the border 216,” confirming the border surrounds the image region. Cazier et al., ¶[0026]. The “border frame” is the claimed frame image (second image) and it surrounds at least part (typically all) of the “image” (first image).

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

As per Claim 7, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).

King teaches Claim 7 which recites: “The control device according to claim 1, wherein the first image is an image with which the specific content is associated.”

King et al. (US20110145068A1) teaches associating digital content (specific content) with an image (first image), stating: the system “facilitates the association of rendered advertisements and other sources of information with digital content,” and “receives an image of a rendered advertisement” and “receives an indication of digital content to associate with the selected regions,” and “associates the digital content to the selected region” (including a region that “covers the entire image”). King et al., ¶[0612]-[0613].
King’s “image of a rendered advertisement” corresponds to the claimed first image, and King’s “digital content” corresponds to the claimed specific content, with King clearly teaching that the specific content is associated with the first image (or a region of that image).

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

As per Claim 8, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).

King teaches Claim 8 which recites: “The control device according to claim 1, wherein the second image is an image with which the specific content is associated.”

King et al. (US20110145068A1) teaches that display elements (images such as icons/outlines) are associated with specific digital content/actions, stating: the system “presents display elements that are associated with the identified content and/or actions to perform over or along with an image of the rendered document …” — King et al., ¶[0587]; and “The display elements may act as controls or indications that are associated with the content or performable actions.” — King et al., ¶[0588]. King’s display elements (the claimed second image) are associated with the identified digital content (the claimed specific content).

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

Price teaches Claim 9 which recites: “The control device according to claim 1, wherein the specific content includes augmented reality content.”

Price et al. (US20200234498A1) teaches augmented reality content as the overlaid virtual content, for example: “Overlaying 3D augmented reality content on real-world objects …” and “techniques of overlaying a virtual object on a physical object in augmented reality (AR) … place or overlay a 3D virtual object on the physical object in AR …” — Price et al., ¶[0004]. Price’s 3D virtual object overlaid in AR is augmented reality content, which satisfies that the claimed specific content includes augmented reality content.

Price teaches Claim 10 which recites: “The control device according to claim 1, wherein the augmented reality content includes first augmented reality content that is played back in a case where a first object included in the first image is included in an imaging angle of view, and second augmented reality content that is played back regardless of whether or not the first object is included in the imaging angle of view.”

Price et al. (US20200234498A1) teaches the first part by describing recognition-based AR that shows an overlay only when an object/marker is sensed (i.e., in view): “Recognition-based (or marker-based) AR uses a camera to identify visual markers or objects to showcase an overlay only when the marker is sensed by the device.” Price et al., ¶[0002]. Price et al. (US20200234498A1) teaches the second part by describing location-based AR whose visualizations are activated based on non-visual inputs (not contingent on the marker/object being in view): “Location-based AR relies on GPS, a digital compass, a velocity meter, or an accelerometer … and the AR visualizations are activated based on these inputs.” Price et al., ¶[0002]. Price’s “only when the marker is sensed” corresponds to AR content being played back when the first object is within the imaging angle of view, while Price’s location-input-triggered AR corresponds to AR content being played back regardless of whether the object is in view (since activation is based on GPS/compass/accelerometer inputs rather than object sensing). A POSITA would be motivated to combine these known AR techniques into a single device (like a smartphone, which Price explicitly mentions in ¶[0026]) to provide a robust user experience that works both with and without specific markers.

As per Claim 11, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).

King teaches Claim 11 which recites: “The control device according to claim 1, wherein the second image includes an identification image, and the identification image is information with which a relative position of the identification image in the third image is specifiable.”

King et al. (US20110145068A1) teaches the “identification image” as overlaid icons/outlines within the displayed image, and teaches that their relative position in the displayed image is determined/specifiable from extracted location information: “the system superimposes icons, colors, graphical outlines … and other display elements over and/or along with an image of the rendered document …” — King et al., ¶[0595]; the system extracts “features corresponding to the location of the information … such as the position of words, lines, paragraphs … within a page …” to “generate and display a markup layer over the captured image” — King et al., ¶[0231]; “the extraction process identifies the location of the text within a capture … [and] may … generate boundaries … within the captured image” — King et al., ¶[0234]. King’s overlaid icons/graphical outlines serve as the claimed identification image within the third image, and King’s extracted location/position information (and generated boundaries) makes the relative position of those identification images in the third image specifiable.

The rationale and motivation to combine the references as set forth for claim 1 are incorporated herein by reference for the present claim.

Method Claim 17 does not include any additional limitations that would significantly distinguish it from claim 1. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

CRM Claim 18 does not include any additional limitations that would significantly distinguish it from claim 1. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.

Ground of Rejection 2

Claim 3 is rejected under 35 U.S.C. § 103 as being unpatentable over Price et al. (US20200234498A1) in view of Marason et al. (US9462255B1).

As per Claim 3, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Marason, they collectively teach all of the limitation(s).

Marason teaches Claim 3 which recites: “The control device according to claim 1, wherein the processor is configured to output the third image data to a projection portion.”

Marason et al. (US9462255B1) teaches outputting image content for projection via a projector (a “projection portion”), for example: “… a room equipped with computerized projection and imaging systems that enable presentation of images … from partial augmentation, such as projecting a single image onto a surface …” — Marason et al. (US9462255B1), ¶[0002]; “… a projection and image capturing system … having a chassis to hold a projector and camera …” — Marason et al. (US9462255B1), ¶[0006]; “… creating an augmented reality environment by projecting a structured light pattern on a scene …” — Marason et al. (US9462255B1), ¶[0007]. Marason’s system includes a projector (the claimed projection portion) and performs projecting of image/pattern content, which requires the system to output the corresponding image data to that projection portion.

Before the effective filing date of the claimed invention, a person of ordinary skill in the art (POSITA) would have been motivated to combine Price et al. (US20200234498A1) with Marason et al. (US9462255B1) because Price teaches generating a composite view (third image data) from multiple image sources and outputting that view, while Marason teaches delivering image content to a projector in a projection-and-imaging system to present images on a surface.
Incorporating Marason’s projection output into Price’s composite-view generation would be a straightforward substitution of one known output modality (projection) for another (display), improving usability for shared or surface-based viewing and producing predictable results (the same composite imagery is presented, but via a projection portion).

Ground of Rejection 3

Claims 12, 13, 15, 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Price et al. (US20200234498A1) in view of King et al. (US20110145068A1).

As per Claim 12, Price teaches the following portion of Claim 12 which recites: “A control device comprising a processor,”

Price et al. (US20200234498A1) teaches a device having a processor: “computing architecture 700 includes processor 704 …” Price et al., ¶[0050].

Price teaches the following portion of Claim 12 which recites: “the processor is configured to: generate third image data based on first image data and second image data;”

Price et al. (US20200234498A1) teaches generating a composite view from images of a physical object and overlaid virtual content: “receive one or more images of the physical object … place or overlay a 3D virtual object on the physical object … [and] generate a composite view …” Price et al., ¶[0004]. Price further states: “a composite view may be … generated … includes the 3D virtual object … overlaid on the physical object.” Price et al., ¶[0045].

Price teaches the following portion of Claim 12 which recites: “and output the third image data,”

Price et al. (US20200234498A1) teaches outputting by display: “computing device 504 displays the composite view 500 …” Price et al., ¶[0035].

Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach some of the limitation(s).
King teaches the following portion of Claim 12, which recites: “a second image based on the second image data in a third image represented by the third image data is an image that is included in a first image based on the first image data in the third image and that indicates a position of an image of a first object with which specific content, which is different from a content of the third image, is associated.”

King et al. (US20110145068A1) teaches overlaying an identification image within a captured image to show where associated content exists: “superimposes icons … graphical outlines … and other display elements over … an image … informs the user that regions within an image are ‘active.’” King et al., ¶[0595]. King also teaches associating “specific content” to a selected region of the image: the system receives “a selection of a portion of the image … [and] digital content to associate … [and] associates the digital content to the selected region.” King et al., ¶[0613]. King further teaches identifying the location/position used for the overlay: extracting “features corresponding to the location … such as the position …” and generating “boundaries … within the captured image.” King et al., ¶[0231], ¶[0234]. King’s overlaid icons/outlines/markup are the claimed second image included in the first image, and they indicate the position of the object/region whose associated digital content is available (content different from the displayed image itself).

Before the effective filing date of the claimed invention, a POSITA would have combined Price et al. (US20200234498A1) with King et al. (US20110145068A1) because Price teaches generating and outputting a composite view from captured imagery plus overlaid content, and King teaches overlaying in-image indicators tied to specific regions to show where associated digital content is available, improving usability and reducing ambiguity in complex scenes with predictable results.
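The position-indicating overlay mapped to claim 12 amounts to re-anchoring an indicator (the second image) to the tracked position of the first object each frame. A minimal illustrative sketch, not code from any cited reference; the function name and the stubbed tracker positions are hypothetical:

```python
# Claim 12/13 in miniature: an indicator (second image) is placed relative to
# the tracked first-object position, and recomputing it per frame makes the
# indicator follow the object's movement.
# Hypothetical sketch; a real tracker would supply the positions.

def overlay_position(object_xy, offset=(0, -10)):
    """Place the indicator at a fixed offset from the tracked object."""
    x, y = object_xy
    dx, dy = offset
    return (x + dx, y + dy)

# Simulated per-frame object positions (what a real tracker would output).
tracked_positions = [(50, 80), (55, 78), (61, 75)]

overlay_track = [overlay_position(p) for p in tracked_positions]
assert overlay_track == [(50, 70), (55, 68), (61, 65)]  # indicator follows object
```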
As per Claim 13, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).

King teaches Claim 13 which recites: “The control device according to claim 12, wherein the image of the first object is an image that moves in the third image, and the second image is an image that moves following movement of the image of the first object.”

King et al. (US20110145068A1) teaches both parts. First-object image moves in the third image: King states that when the user moves the mobile device or the target document, the “movement … causes the image within the field of view … to change (e.g., from one portion of the document to another portion).” King et al., ¶[0598]. Second image moves following the first object’s movement: King further states the system “dynamically updating the display elements presented to a user when the user moves … the mobile device … or a rendered document …” and “may dynamically update presented display elements based on what is in the field of view.” King et al., ¶[0598]. In King, the captured real-time image (third image) changes as the viewed object/document moves in the field of view, and the overlaid display elements (second image) are dynamically updated to follow that movement, satisfying Claim 13.

The rationale and motivation to combine the references as set forth for claim 12 are incorporated herein by reference for the present claim.

As per Claim 15, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).
King teaches Claim 15 which recites: “The control device according to claim 12, wherein the processor is configured to generate the third image data representing the third image including the second image based on information on a feature region of a display region in which the third image is to be displayed, and the position of the image of the first object in the first image.”

King et al. (US20110145068A1) teaches generating an enhanced image (third image) including overlay highlights (second image) based on a “region of interest” (feature region) and its location/position in the image/display: “information may include … overlaying highlights on the captured images to indicate possible regions of interest …” — King et al., ¶[0050]; “a main region of interest is suggested … based on location in the image, such as the center of a screen of a capture device …” — King et al., ¶[0050]. King’s “overlaying highlights” corresponds to the claimed second image included in the third image, and King generates that overlay based on the region of interest (a feature region of the display) and its location/position in the image (the claimed position of the first-object image/region in the first image).

The rationale and motivation to combine the references as set forth for claim 12 are incorporated herein by reference for the present claim.

As per Claim 16, Price alone does not explicitly teach all the limitation(s) of the claim. However, when combined with King, they collectively teach all of the limitation(s).

King teaches Claim 16 which recites: “The control device according to claim 12, wherein the second image is a frame image that surrounds at least a part of the image of the first object.”

King et al. (US20110145068A1) teaches using graphical outlines (a frame-type overlay) over an image to mark active regions: “superimposes … graphical outlines … over and/or along with an image … [so] regions within an image are ‘active.’” King et al., ¶[0595]. King also teaches generating boundaries within the captured image (“generate boundaries corresponding to words and paragraphs within the captured image”), which supports an outline that surrounds a region/object in the image. King et al., ¶[0234].

The rationale and motivation to combine the references as set forth for claim 12 are incorporated herein by reference for the present claim.

Conclusion

The prior art made of record and relied upon in this action is as follows:

Patent Literature:
Marason (US9462255B1) — “Projection and Camera System for Augmented Reality Environment”
Price (US20200234498A1) — “Overlaying 3D augmented reality content on real-world objects using image segmentation”
Cazier (US20080122859A1) — “Border frame color picker”
King (US20110145068A1) — “Associating rendered advertisements with digital content”

Non-Patent Literature (NPL): (none)

Note: A PDF copy of each NPL reference is attached with this Office Action. URLs are included for applicant convenience. If a link becomes unavailable in the future, the citation information may be used to locate the reference or access archived versions via the Wayback Machine.

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure and is listed as follows:

Patent Literature:
Baudisch (US20070025723A1) — “Real-time preview for panoramic images”

Non-Patent Literature (NPL): (none)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADEEL BASHIR whose telephone number is (571) 270-0440. The examiner can normally be reached Monday-Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 276-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADEEL BASHIR/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

May 29, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103
Mar 09, 2026
Interview Requested
Mar 26, 2026
Applicant Interview (Telephonic)
Apr 02, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597209
USING POLYGON MESH RENDER COMPOSITES DURING NEURAL RADIANCE FIELD (NERF) GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12586333
AUTOMATED METHOD FOR GENERATING PROSTHESIS FROM THREE DIMENSIONAL SCAN DATA, APPARATUS GENERATING PROSTHESIS FROM THREE DIMENSIONAL SCAN DATA AND COMPUTER READABLE MEDIUM HAVING PROGRAM FOR PERFORMING THE METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12586302
RENDERING HAIR
2y 5m to grant Granted Mar 24, 2026
Patent 12573126
SPLIT BOUNDING VOLUMES FOR INSTANCES
2y 5m to grant Granted Mar 10, 2026
Patent 12555280
VECTOR GRAPHICS BASED LIVE SKETCHING METHODS AND SYSTEMS
2y 5m to grant Granted Feb 17, 2026
The five most recent grants above show what changed to get past this examiner.


Prosecution Projections

1-2
Expected OA Rounds
94%
Grant Probability
99%
With Interview (+7.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
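The headline figures above follow directly from the career counts quoted in this report (33 granted of 35 resolved cases, and a +7.4 percentage-point interview lift). A minimal sketch of the arithmetic, with illustrative helper names; the report does not disclose its exact model for the 99% with-interview projection, so only the base rate and the lift definition are reproduced here:

```python
# Toy recomputation of the report's headline allow rate from the career
# counts it quotes (33 granted of 35 resolved). Helper names are
# illustrative; the report's model for the 99% "with interview" figure
# is not disclosed, so only the base rate is reproduced.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift in percentage points: allow rate among resolved
    cases that had an interview minus the rate among those that did not."""
    return (rate_with - rate_without) * 100.0

base = allow_rate(33, 35)
print(f"Career allow rate: {base:.0%}")  # Career allow rate: 94%
```

Note that naively adding 7.4 points to a 94% base would exceed 100%, which is one reason the with-interview projection is likely computed by a capped or non-additive model rather than simple addition.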
