Prosecution Insights
Last updated: April 18, 2026
Application No. 18/220,661

FEATURE LAYERS FOR RENDERING OF DESIGN OPTIONS

Current action: Non-Final OA (§103), Round 3
Filed: Jul 11, 2023
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 (Communications)
Assignee: WTS Paradigm LLC

Predictions
Grant probability: 68% (favorable)
OA rounds: 3-4
Time to grant: 2y 10m
Grant probability with interview: 98%

Examiner Intelligence

Career allowance rate: 68%, above average (711 granted / 1,052 resolved; +5.6% vs Tech Center average)
Interview lift: +29.9% higher allowance rate in resolved cases with an examiner interview
Typical timeline: 2y 10m average prosecution; 110 applications currently pending
Career history: 1,162 total applications across all art units

Statute-Specific Performance

Statute    Rate     vs TC avg
§101        2.2%    -37.8%
§102       27.0%    -13.0%
§103       43.9%     +3.9%
§112       20.7%    -19.3%

Based on career data from 1,052 resolved cases; Tech Center averages are estimates.
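For readers who want to sanity-check the headline numbers, here is a minimal Python sketch of the arithmetic, assuming the usual definitions (allowance rate = grants / resolved; interview lift = allowance rate with interview minus without). The vendor's exact methodology is not disclosed, so the Tech Center average and the with/without-interview rates below are assumed values chosen to be consistent with the figures shown above.

    # Illustrative reconstruction of the dashboard arithmetic (assumed formulas).
    GRANTED = 711          # career grants for this examiner
    RESOLVED = 1052        # career resolved applications
    TC_AVG_ALLOW = 0.620   # assumed TC 2600 average implied by the +5.6% delta

    allow_rate = GRANTED / RESOLVED          # 0.6759 -> reported as 68%
    delta_vs_tc = allow_rate - TC_AVG_ALLOW  # ~ +0.056 -> "+5.6% vs TC avg"

    # Hypothetical underlying rates, consistent with the reported ~+30% lift.
    rate_with_interview = 0.92
    rate_without_interview = 0.62
    interview_lift = rate_with_interview - rate_without_interview

    print(f"Career allowance rate: {allow_rate:.1%}")
    print(f"Delta vs Tech Center average: {delta_vs_tc:+.1%}")
    print(f"Interview lift: {interview_lift:+.1%}")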

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/09/2025 has been entered.

Response to Amendment

This is in response to applicant's amendment/response filed on 10/09/2025, which has been entered and made of record. Claims 1, 11, and 20 have been amended. No claim has been cancelled. Claim 22 has been added. Claims 1-22 are pending in the application.

Response to Arguments

Applicant's arguments filed on 10/09/2025 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Cerrato et al. (US Pub 2023/0141027 A1) in view of Cini (US Pub 2022/0406027 A1), further in view of Wang et al. (US Pub 2019/0095985 A1).

As to claim 1, Cerrato discloses a system comprising: an electronic processor configured to access feature data, the feature data including a set of features of a virtual space, wherein each feature included in the set of features is associated with at least one feature parameter (Cerrato, ¶0036, “A virtual environment for a content such as a video game typically includes a number of respective graphical features. There are a number of types of graphical feature associated with virtual environments, but they can generally be considered to fall into two broad categories; the first is scenery, which is typically static with respect to an origin or other reference point of the virtual environment.
The second is virtual objects which may or may not be static with respect to the virtual environment and may have different degrees of relative mobility.”), generate a set of sample electronic renderings for the virtual space for identifying a set of feature layers based on the feature data (Cerrato, ¶0041, “A static graphical feature corresponds to a feature that has a substantially same position in two or more successive images output for display irrespective of changes in the viewpoint for the virtual camera. For example, for a content such as a video game having one or more user interface elements, such as a health bar UI or score bar UI, the user interface elements may be included in an image frame that is output for display such that a position of the user interface element in the image frames remains unchanged regardless of changes in the position and/or orientation of the viewpoint of the virtual camera. Alternatively or in addition, a graphical feature such as a helmet worn by an in-game character or a weapon held by an in-game character, may be included in an image frame that is output for display such that a position of the helmet or weapon in the image frame remains unchanged regardless of changes in the position and/or orientation of the viewpoint of the virtual camera.” “For virtual environments including such static graphical features, these graphical features can be allocated to the static re-projection layer so that the static graphical features can be rendered with a rendering quality associated with the static re-projection layer. In some examples, a graphical feature allocated to the static re-projection layer may be initially rendered by the rendering circuitry 220 and then subsequently re-projected for each image frame output by the output image generator 230. Alternatively, a graphical feature allocated to the static layer may be rendered at a lower rendering rate and re-projected to obtain a suitable frame rate for the graphical feature allocated to the static re-projection layer.” Renderings of static graphical features correspond to sample electronic renderings.), determine rendering data based on the set of sample electronic renderings, wherein the rendering data includes the set of feature layers, wherein each feature layer is associated with at least one feature of the set of features (Cerrato, ¶0040, “a likelihood that a given graphical feature will exhibit movement in a sequence of image frames generated for a virtual camera can be related to one or more properties for the graphical feature such as motion with respect to other graphical features and/or motion with respect to a virtual camera and/or depth with respect to the virtual camera. Therefore, using various data associated with a graphical feature in a virtual environment, the allocation circuitry 210 can accordingly allocate the graphical feature to a respective layer so that rendering of the graphical feature uses a rendering quality associated with that respective layer.” ¶0041, “A static graphical feature corresponds to a feature that has a substantially same position in two or more successive images output for display irrespective of changes in the viewpoint for the virtual camera.
For example, for a content such as a video game having one or more user interface elements, such as a health bar UI or score bar UI, the user interface elements may be included in an image frame that is output for display such that a position of the user interface element in the image frames remains unchanged regardless of changes in the position and/or orientation of the viewpoint of the virtual camera. Alternatively or in addition, a graphical feature such as a helmet worn by an in-game character or a weapon held by an in-game character, may be included in an image frame that is output for display such that a position of the helmet or weapon in the image frame remains unchanged regardless of changes in the position and/or orientation of the viewpoint of the virtual camera.” ¶0049, “a relatively mobile graphical feature, such as a virtual car in the virtual environment, can be allocated to the mobile layer by the allocation circuitry 210 depending on one or more properties for the virtual car, whereas a relatively less mobile graphical feature such as the sky portion (being relatively less mobile in that the sky portion remains relatively unchanged for a series of image frames or that the sky portion does not move with respect to a reference point), can be allocated to the mobile projection layer.” ¶0067, “The techniques to be discussed below relate to using one or more properties associated with a graphical feature to allocate graphical features in a virtual environment to respective layers to optimise use of rendering resources. Execution of a video game, or other application, by a game engine may generate a sequence of video images, a sequence of in-game virtual camera positions and a sequence of depth buffer values, and a recording of such data for a previously executed video game may be used by the allocation circuitry 210 to perform an offline allocation of graphical features for the virtual environment. Alternatively or in addition, the data processing apparatus 200 can be configured to allocate one or more graphical features for a virtual environment to respective layers as part of an online process whilst images are being output by the image generator and properties of graphical features in the images output by the image generator can be used for performing the allocation.” ¶0068, “the data processing apparatus 200 may initially allocate the graphical features for the virtual environment to the mobile layer, and the allocation circuitry 210 can be configured to allocate a graphical feature to another layer different from the mobile layer in dependence upon one or more properties of the graphical feature identified from one or more of the images output by the output image generator. Hence more generally, in some examples the allocation circuitry 210 can be configured to allocate graphical features to the mobile layer as a default and subsequently re-allocate a graphical feature from the mobile layer to another layer in dependence upon one or more properties for the graphical feature.” ¶0071, “The allocation circuitry 210 can be configured to receive information indicative of a position of a graphical feature with respect to the virtual environment (e.g. with respect to a predetermined point in the virtual environment) and to allocate the virtual avatar to a respective layer in dependence upon changes in the position of the graphical feature over a period of time. 
In a simplest case in which the plurality of layers comprises the mobile layer and the static re-projection layer, the allocation circuitry 210 is configured to allocate the graphical feature to either the mobile layer or the static re-projection layer in dependence upon whether the information indicates that the position of the graphical feature with respect to the virtual environment is changeable.” ¶0072-0073. ¶0078, “features that exhibit no motion relative to the virtual camera (i.e. are fixed with respect to the virtual camera) can be identified and the allocation circuitry can allocate such features to static re-projection layer (or to the mobile re-projection layer if the static re-projection layer is not provided). For example, a graphical feature such as a weapon held by an avatar or helmet worn by an avatar which moves with the avatar may thus exhibit no motion relative to the virtual camera when the virtual camera corresponds to the position of the avatar and such features can be identified as being static and thus allocated accordingly to a layer with a low associated rendering quality. Conversely, features that exhibit a high degree of motion relative to the virtual camera and which are thus likely to have a changeable position in image frames generated for the virtual camera can be identified an allocated accordingly to layer with a higher associated rendering quality.” ¶0081, “the allocation circuitry 210 is configured to allocate the graphical feature to the respective layer in dependence upon a depth of the graphical feature with respect to the viewpoint of the virtual camera. Alternatively or in addition to using a motion associated with a graphical feature to allocate the graphical feature, the allocation circuitry 210 can be configured to allocate a graphical feature to a respective layer according to a depth of the graphical feature in the virtual environment relative to the virtual camera. A graphical feature located in a foreground portion of a scene is more likely to be of more importance for viewer's experience and/or is more likely to be subject to changes in position in an image generated for the virtual camera than a graphical feature located in a background portion of the scene.” ¶0085, “the allocation circuitry 210 is configured to allocate at least one graphical feature currently allocated to a respective layer of the plurality of layers to another respective layer of the plurality of layers in dependence upon one or more of the properties for the graphical feature to thereby reallocate the graphical feature.”), based on an analysis of one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features (Cerrato, ¶0040, “one or more properties for the graphical feature such as motion with respect to other graphical features“. ¶0070, “allocate a graphical feature to a respective layer in dependence upon motion associated with the graphical feature. 
As discussed above, motion for a respective graphical feature with respect to other graphical features in the virtual environment” ¶0079, “the allocation circuitry 210 is configured to allocate the graphical feature to the mobile layer in dependence upon whether the motion associated with the graphical feature exceeds a predetermined threshold.” ¶0086, “a given graphical feature in the virtual environment may initially exhibit a high degree of movement with respect to other graphical features in the virtual environment and/or the virtual camera and may thus have been initially allocated to the mobile layer.” “a graphical feature such as a virtual weapon may initially move with respect to the virtual camera when operated by an avatar (e.g. an enemy avatar in a game) and the virtual weapon may subsequently be acquired by an avatar corresponding to the virtual camera such that the virtual weapon exhibits a smaller amount of motion (or potentially no motion) relative to the virtual camera. In such cases, the allocation circuitry 210 can reallocate the graphical feature from the mobile layer to another layer having a lower rendering quality responsive to a change in at least one property associated with the graphical feature. In a similar manner, a feature that initially does not move with respect to the virtual environment may be picked up by an avatar thus resulting increased motion for the graphical feature and a reallocation can be performed to allow rendering for the graphical feature using a layer with improved rendering quality.”); receive a rendering request for the virtual space, the rendering request indicating a subset of features and associated feature parameters selected from the set of features as selected feature data (Cerrato, ¶0043, “The mobile layer differs from the mobile re-projection layer in that a higher rendering quality is associated with the mobile layer, and re-projection can be used for updating a rendered graphical feature allocated to the mobile re-projection layer in dependence upon changes in the position and/or orientation for the viewpoint of the virtual camera. The mobile re-projection layer differs from the static re-projection layer in that a higher rendering quality is used for the mobile re-projection layer (e.g. by rendering with a higher frame rate and/or image resolution), and whereas re-projection is used for updating a rendered graphical feature for the mobile re-projection layer in dependence upon changes in the position and/or orientation for the viewpoint of the virtual camera, a rendered graphical feature allocated to the static re-projection layer is re-projected to have substantially the same position in each image frame and is thus re-projected irrespective of changes in the position and/or orientation for the viewpoint of the virtual camera.” ¶0047, “one or more first graphical features allocated to a first layer (e.g. mobile layer) are rendered by the rendering circuitry 220 with a first rendering quality and one or more second graphical features allocated to a second layer (e.g. mobile re-projection layer or static re-projection layer) are rendered by the rendering circuitry 220 with a second rendering quality different from the first rendering quality. Hence more generally, at least some of the graphical features for the virtual environment are allocated to the layers to sort the graphical features into layers so that the graphical features can be rendered and different rendering resources can be used appropriately. 
In this way, rendering resources can be used in a targeted manner so that higher quality rendering can be performed for a graphical feature allocated to the mobile layer and lower quality rendering can be performed for a graphical feature allocated to the mobile re-projection layer or static re-projection layer.”), and in response to receiving the rendering request, generate, using the rendering data and according to the rendering request, an electronic rendering of the virtual space based on the selected feature data (Cerrato, ¶0048, “an image for display can be generated by the output image generator 230 based on rendered graphical features for a single layer of the plurality of layers. For example, in the case of an image frame comprising just a sky portion of a virtual environment, the graphical feature corresponding to the sky may be allocated to a respective layer so that the rendering resources associated with that respective layer are used by the rendering circuitry 220 for rendering the sky portion and the output image generator 230 generates the image frame for display. Alternatively, for a more feature rich image frame comprising a number of respective graphical features, one or more of the graphical features are rendered with a first rendering quality associated with a first layer and one or more of the graphical features are rendered with a second rendering quality associated with a second layer, and the output image generator 230 generates an image for display comprising graphical features allocated to the respective different layers and thus having been rendered with different rendering qualities. As such, the output image generator 230 is configured to combine one or more rendered graphical features allocated to a first layer with one or more rendered graphical features allocated to a second layer to generate the image frame.”). Assuming, arguendo, that Cerrato does not disclose based on an analysis of one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features. Cini teaches based on an analysis of one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features (Cini, ¶0053, “classifies features 120, attributes, and/or combinations of features 120 and/or attributes to one or more global styles most probably associated with such features 120, attributes, and/or combinations of features 120 and/or attributes.” ¶0057, “Short distances between features 120, attributes, and/or combinations thereof to global style attributes and a cluster may indicate a higher degree of similarity between a feature 120, attribute, and/or combination thereof to global style attributes and a particular cluster. Longer distances between features 120, attributes, and/or combinations thereof to global style attributes and a cluster may indicate a lower degree of similarity between the features 120, attributes, and/or combinations thereof to global style attributes and a particular cluster.” ¶0058, “indicating a high degree of similarity between a feature 120, attribute, and/or combination thereof and a particular data entry cluster representing a global style attribute.” ¶0058, “indicative of greater degrees of similarity. 
Degree of similarity index values may be compared to a threshold number indicating a minimal degree of relatedness suitable for inclusion of a feature 120, attribute, and/or combination thereof in a cluster, where degree of similarity indices falling under the threshold number may be included as indicative of high degrees of relatedness.” ¶0076, “where first attribute 144 includes a given amount of weight on a wall-mounted feature 120, second attribute 156 may include an associated structural attribute requiring a stud or other supporting element to be present in a wall where the wall-mounted feature 120 is located, which modeling device 104 may use to select second attribute 156. Populating the plurality of attributes 116 may include generating the second attribute 156 as a function of first feature 140; for instance, first feature 140 may be associated in feature attribute table 308 with an infrastructural attribute such as a requirement to be connected to an electrical, gas, and/or water supply, which modeling device 104 may use to select second attribute 156.” ¶0084, “modeling device 104 may designed and configured to select a vector representing a feature 120, combination of features 120, and/or combination of features 120 with attributes, using a loss function by generating a loss function of the plurality of vectors representing features 120, combinations of features 120, and/or combinations of features 120 with attributes and feature 120 goal vector, minimizing the loss function, and selecting a vector from the plurality of vectors representing features 120, combinations of features 120, and/or combinations of features 120 with attributes as a function of minimizing the loss function; a first feature 140 may be selected from and/or presented using a vector selected via minimizing the loss function, for instance by tracking, in memory of modeling device 104, which features 120 and/or attributes were used to generate vectors input to the loss function. A “loss function,” as used herein is an expression of an output of which an optimization algorithm minimizes to generate an optimal result. As a non-limiting example, modeling device 104 may select a feature 120, combination of features 120, and/or combination of features 120 with attributes having an associated vector that minimizes a measure of difference from feature 120 goal vector; measure of difference may include, without limitation, a measure of geometric divergence between vector representing feature 120, combination of features 120, and/or combination of features 120 with attributes and feature 120 goal vector, such as without limitation cosine similarity, or may include any suitable error function measuring any degree of divergence and/or aggregation of degrees of divergence, between attributes of feature 120 goal vector and vector representing feature 120, combination of features 120, and/or combination of features 120 with attributes. Selection of different loss functions may result in identification of different vectors as generating minimal outputs. Alternatively or additionally, each of feature 120 goal vector and each vector representing a feature 120, combination of features 120, and/or combination of features 120 with attributes may be represented by a mathematical expression having the same form as mathematical expression; modeling device 104 may compare the former to the latter using an error function representing average difference between the two mathematical expressions. 
Error function may, as a non-limiting example, be calculated using the average difference between coefficients corresponding to each variable.” ¶0099, “performed in the form of an interactive decision game that pieces together pre-designed modules, optionally with variations real time changing variables and variations, into established floor plans, site plans or into a blank slate environment.”). Cerrato and Cini are considered to be analogous art because all pertain to image rendering. It would have been obvious before the effective filing date of the claimed invention to have modified Cerrato with the features of “based on an analysis of one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features.” as taught by Cini. The suggestion/motivation would have been in order to account for many different factors, including not only the appearance of the final design, but the way in which elements needed to effect the design interact, and the practical consequences (Cini, ¶0003). Cerrato does not disclose the set of feature layers representing partial renderings of the virtual space to be stored in a memory for later use in an on-demand rendering, the rendering request retrieving, from the memory, the set of feature layers that match the rendering request and correspond to the selected feature data, the electronic rendering of the virtual space including: a first part of the virtual space rendered in real time based on unavailability of a pre-rendered feature layer, and a second part of the virtual space rendered based on the retrieved set of feature layers. Wang teaches the set of feature layers representing partial renderings of the virtual space to be stored in a memory for later use in an on-demand rendering (¶0219, “The list of to-be-collocated transaction objects includes a plurality of to-be-collocated transaction objects. The to-be-collocated transaction objects refer to transaction objects having collocation attribute information, that is, transaction objects that may be collocated with other transaction objects for display. For example, the to-be-collocated transaction objects may be transaction objects for house decoration, transaction objects of clothing, or the like. For example, the transaction objects for house decoration may include sofa, tea tables, desks, chairs, etc.; and the transaction objects of clothing may include shirts, pants, suits, ties, leather shoes, etc.” ¶0269, “a transaction object and collocation template acquiring unit 410 configured to acquire a list of to-be-collocated transaction objects and acquire a preset collocation template” ¶0291, “A collocation rendering displaying request corresponding to a list of specific to-be-collocated transaction objects and a specific preset collocation template is sent to a server.” The to-be-collocated transaction objects are the partial renderings of the virtual space because they were rendered as images before the collocation process.), and the rendering request retrieving, from the memory, the set of feature layers that match the rendering request and correspond to the selected feature data (¶0254, “the collocation template defines categories of transaction objects included in the collocation rendering, as well as image layer identifiers, image layer positions and image layer sizes of image layers where the transaction objects of the categories are located. FIG.
2 shows the class of transaction objects including decorative painting 202(1), decorative painting 202(1), chair 204, sofa 206, cabinet 208, and tea table 210.” ¶0259, “filling the images of the to-be-collocated transaction objects into different image layers of the preset collocation template. As images of different transaction objects are located in different image layers, the corresponding relation, that is, the layout information of the collocation rendering, may be extracted according to the collocation template into which the images have been filled.” ¶0277, “a filling subunit configured to fill, according to the category attributes of the to-be-collocated transaction objects, images of the to-be-collocated transaction objects into corresponding spatial positions of the preset collocation template” ¶0279, “the filling subunit is configured to fill the images of the to-be-collocated transaction objects into different image layers of the preset collocation template” ¶0351, “Images of to-be-displayed items are filled into different image layers of an image container.”), the electronic rendering of the virtual space including: a first part of the virtual space rendered in real time based on unavailability of a pre-rendered feature layer, and a second part of the virtual space rendered based on the retrieved set of feature layers (¶0255, “the step of generating image information of a collocation rendering of the to-be-collocated transaction objects includes the following steps: 2.1) filling, according to the category attributes of the to-be-collocated transaction objects, images of the to-be-collocated transaction objects into corresponding spatial positions of the preset collocation template; and 2.2) generating image information of the collocation rendering according to the preset collocation template into which the images have been filled.” ¶0260, “1) extracting a corresponding relation between the images and layout modes of the images from the preset collocation template into which the images have been filled; and 2) using the extracted corresponding relation as the layout information of the collocation rendering.” ¶0278, “a generating subunit configured to generate image information of the collocation rendering according to the preset collocation template into which the images have been filled.” Images of the to-be-collocated transaction objects are the second part of the virtual space rendered based on the retrieved set because they are existing images. The generated collocation image is the first part of the virtual space rendered in real time because it is not pre-rendered beforehand.). Cerrato, Cini and Wang are considered to be analogous art because all pertain to image rendering. It would have been obvious before the effective filing date of the claimed invention to have modified Cerrato with the features of “the set of feature layers representing partial renderings of the virtual space to be stored in a memory for later use in an on-demand rendering, the rendering request retrieving, from the memory, the set of feature layers that match the rendering request and correspond to the selected feature data, the electronic rendering of the virtual space including: a first part of the virtual space rendered in real time based on unavailability of a pre-rendered feature layer, and a second part of the virtual space rendered based on the retrieved set of feature layers.” as taught by Wang.
The suggestion/motivation would have been in order to generate a collocation rendering of a plurality of transaction objects (Wang, ¶0007). As to claim 2, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses determine, based on the feature-to-feature influence, the set of feature layers organizing the subset of features having a threshold level of the feature-to-feature influence on a same one of the feature layers of the set. (Cerrato, ¶0070, “the allocation circuitry 210 is configured to allocate a graphical feature to a respective layer in dependence upon motion associated with the graphical feature. As discussed above, motion for a respective graphical feature with respect to other graphical features in the virtual environment and/or with respect to a reference point in the virtual environment and/or with respect to the viewpoint of the virtual camera can be used to allocate the respective graphical feature.” ¶0071, “The allocation circuitry 210 can be configured to receive information indicative of a position of a graphical feature with respect to the virtual environment (e.g. with respect to a predetermined point in the virtual environment) and to allocate the virtual avatar to a respective layer in dependence upon changes in the position of the graphical feature over a period of time. In a simplest case in which the plurality of layers comprises the mobile layer and the static re-projection layer, the allocation circuitry 210 is configured to allocate the graphical feature to either the mobile layer or the static re-projection layer in dependence upon whether the information indicates that the position of the graphical feature with respect to the virtual environment is changeable.”. ¶0075, “the allocation circuitry 210 is configured to allocate the graphical feature to the respective layer in dependence upon motion vector information for the graphical feature.” ¶0081, “the allocation circuitry 210 is configured to allocate the graphical feature to the respective layer in dependence upon a depth of the graphical feature with respect to the viewpoint of the virtual camera.” ¶0085, “the allocation circuitry 210 is configured to allocate at least one graphical feature currently allocated to a respective layer of the plurality of layers to another respective layer of the plurality of layers in dependence upon one or more of the properties for the graphical feature to thereby reallocate the graphical feature.” Cini, ¶0058, “indicative of greater degrees of similarity. Degree of similarity index values may be compared to a threshold number indicating a minimal degree of relatedness suitable for inclusion of a feature 120, attribute, and/or combination thereof in a cluster, where degree of similarity indices falling under the threshold number may be included as indicative of high degrees of relatedness.”). As to claim 3, claim 2 is incorporated and Cerrato does not disclose the subset of features having the feature-to-feature influence includes at least one of an appliance having a reflective surface, a cabinet, and a light fixture. Cini teaches the subset of features having the feature-to-feature influence includes at least one of an appliance having a reflective surface, a cabinet, and a light fixture (Cini, ¶0036, “plurality of attributes 116 may include a lighting attribute, defined an attribute that affects ambient light in interior space. 
Lighting attribute may include isolated and/or aggregate effects items represented by features 120 have on lighting in internal space; such changes may include, without limitation, changes representing increased or decreased light output from light fixtures, decreased or increased light admitted by windows, addition of shades, frosted glass, or other elements that occlude or reduce light output or transmittance, increases or decreases in reflectiveness of item, increases or decreases in fluorescence or phosphorescence, increases or decreases in opacity or translucence, or the like. Lighting attribute may include a pattern of light and/or shadow cast on visual representation of features 120 and may be modified to reflect change; visual representation of second feature 160 may be changed accordingly as well. Lighting attributes such as light and shadow patterns in three-dimensional model may be tracked and represented as an attribute of three-dimensional model and/or of interior space data structure 112 containing the three-dimensional model”). Cerrato and Cini are considered to be analogous art because all pertain to image rendering. It would have been obvious before the effective filing date of the claimed invention to have modified Cerrato with the features of “the subset of features having the feature-to-feature influence includes at least one of an appliance having a reflective surface, a cabinet, and a light fixture” as taught by Cini. The suggestion/motivation would have been in order to define an attribute that affects ambient light in interior space (Cini, ¶0036). As to claim 4, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the set of sample electronic renderings is associated with a first rendering quality metric and the electronic rendering is associated with a second rendering quality metric, wherein the second rendering quality metric is different than the first rendering quality metric (Cerrato, ¶0048, “one or more of the graphical features are rendered with a first rendering quality associated with a first layer and one or more of the graphical features are rendered with a second rendering quality associated with a second layer, and the output image generator 230 generates an image for display comprising graphical features allocated to the respective different layers and thus having been rendered with different rendering qualities.”). As to claim 5, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the set of sample electronic renderings is associated with a first rendering quality metric and the electronic rendering is associated with a second rendering quality metric, wherein the second rendering quality metric is higher than the first rendering quality metric (Cerrato, ¶0047, “rendering resources can be used in a targeted manner so that higher quality rendering can be performed for a graphical feature allocated to the mobile layer and lower quality rendering can be performed for a graphical feature allocated to the mobile re-projection layer or static re-projection layer.”). As to claim 6, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses each of a plurality of sample electronic renderings included in the set of sample electronic renderings is associated with a different configuration of one or more features and associated feature parameters of the one or more features (Cerrato, ¶0078, “Therefore, features that exhibit no motion relative to the virtual camera (i.e.
are fixed with respect to the virtual camera) can be identified and the allocation circuitry can allocate such features to static re-projection layer (or to the mobile re-projection layer if the static re-projection layer is not provided). For example, a graphical feature such as a weapon held by an avatar or helmet worn by an avatar which moves with the avatar may thus exhibit no motion relative to the virtual camera when the virtual camera corresponds to the position of the avatar and such features can be identified as being static and thus allocated accordingly to a layer with a low associated rendering quality.” ¶0080, “when the allocation circuitry 210 determines that the graphical feature is a static graphical feature the allocation circuitry 210 allocates the graphical feature to the static re-projection layer.”). As to claim 7, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the rendering data describes variations between sample electronic renderings included in the set of sample electronic renderings (Cerrato, ¶0041, “a graphical feature such as a helmet worn by an in-game character or a weapon held by an in-game character, may be included in an image frame that is output for display such that a position of the helmet or weapon in the image frame remains unchanged regardless of changes in the position and/or orientation of the viewpoint of the virtual camera. For example, in some first person shooter games, a virtual helmet or other similar structure can be included in an image frame to provide a viewer the impression of viewing the scene as though they are wearing a helmet and as such the position and appearance of the helmet remains unchanged for a series of image frames generated for the virtual camera even though the virtual camera moves with respect to the virtual environment. For virtual environments including such static graphical features, these graphical features can be allocated to the static re-projection layer so that the static graphical features can be rendered with a rendering quality associated with the static re-projection layer. In some examples, a graphical feature allocated to the static re-projection layer may be initially rendered by the rendering circuitry 220 and then subsequently re-projected for each image frame output by the output image generator 230. Alternatively, a graphical feature allocated to the static layer may be rendered at a lower rendering rate and re-projected to obtain a suitable frame rate for the graphical feature allocated to the static re-projection layer. It will be appreciated that depending on the virtual environment, static graphical features may or may not be present in which case the plurality of respective layers may comprise the mobile layer and the mobile re-projection layer without requiring the static re-projection layer.“ ¶0049, “a relatively mobile graphical feature, such as a virtual car in the virtual environment, can be allocated to the mobile layer by the allocation circuitry 210 depending on one or more properties for the virtual car, whereas a relatively less mobile graphical feature such as the sky portion (being relatively less mobile in that the sky portion remains relatively unchanged for a series of image frames or that the sky portion does not move with respect to a reference point), can be allocated to the mobile projection layer. 
For an image frame including the virtual car (or a portion thereof) and the sky portion, the virtual car and sky portion are each rendered with a rendering quality associated with their corresponding layer, and the output image generator 230 is configured to generate a composite image comprising the virtual car and the sky portion.” ¶0080, “At a step 510 the allocation circuitry 210 obtains data associated with the graphical feature for the virtual environment. At a step 520 the allocation circuitry 210 determines whether the graphical feature is a mobile graphical feature or a static virtual feature with respect to the virtual camera in dependence upon the obtained data.”). As to claim 8, claim 7 is incorporated and the combination of Cerrato, Cini and Wang discloses the variations describe how at least one sample electronic rendering changed in response to a different configuration of one or more features and associated feature parameters of the one or more features (Cerrato, ¶0071, “a mobile feature is allocated to the mobile layer and a static feature is allocated to the static re-projection layer. For the case in which the plurality of layers comprises the mobile layer and the mobile re-projection layer, the allocation circuitry 210 is configured to allocate the graphical object to the mobile layer in dependence upon whether a magnitude of the change in position for the graphical feature is greater than a threshold distance. In this way, graphical features with a higher mobility can be allocated to the mobile layer and graphical features with a lower degree of mobility can be allocated to the mobile re-projection layer.” ¶0079, “A predetermined threshold can be used for comparison with the motion of a graphical feature so as to determine whether to allocate the graphical feature to the mobile layer. For a graphical feature having motion that is greater than or equal to the predetermined threshold, the allocation circuitry 210 accordingly allocates the graphical feature to the mobile layer. For a graphical feature having motion that is less than the predetermined threshold, the allocation circuitry 210 accordingly allocates the graphical feature to another layer other than the mobile layer. In a simplest case using the mobile layer and the mobile re-projection layer, the graphical feature is allocated to either the mobile layer or the mobile re-projection layer in dependence upon an evaluation of the motion of the graphical feature with respect to the predetermined threshold.” ¶0080, “At a step 510 the allocation circuitry 210 obtains data associated with the graphical feature for the virtual environment. At a step 520 the allocation circuitry 210 determines whether the graphical feature is a mobile graphical feature or a static virtual feature with respect to the virtual camera in dependence upon the obtained data.”). As to claim 9, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the rendering data identifies at least one portion of the virtual space that when rendered changes in response to a different configuration of one or more features and associated feature parameters of the one or more features (Cerrato, ¶0086, “the allocation circuitry 210 can reallocate the graphical feature from the mobile layer to another layer having a lower rendering quality responsive to a change in at least one property associated with the graphical feature. 
In a similar manner, a feature that initially does not move with respect to the virtual environment may be picked up by an avatar thus resulting increased motion for the graphical feature and a reallocation can be performed to allow rendering for the graphical feature using a layer with improved rendering quality.” ¶0089, “the rendering circuitry 220 is configured to render graphical features allocated to respective different layers of the plurality of layers with at least one of a different rendering rate and a different image resolution.” ¶0095, “one or more of an image resolution and a rendering rate can be adjusted for a respective layer according to the gaze direction.”). As to claim 10, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the electronic processor is configured to generate the electronic rendering by identifying at least one portion of the virtual space that when rendered changes in response to one or more changes in the subset of features and associated feature parameters, and re-render the at least one portion of the virtual space as part of the generated electronic rendering (Cerrato, ¶0086, “the allocation circuitry 210 can reallocate the graphical feature from the mobile layer to another layer having a lower rendering quality responsive to a change in at least one property associated with the graphical feature. In a similar manner, a feature that initially does not move with respect to the virtual environment may be picked up by an avatar thus resulting increased motion for the graphical feature and a reallocation can be performed to allow rendering for the graphical feature using a layer with improved rendering quality.” ¶0089, “the rendering circuitry 220 is configured to render graphical features allocated to respective different layers of the plurality of layers with at least one of a different rendering rate and a different image resolution.” ¶0095, “one or more of an image resolution and a rendering rate can be adjusted for a respective layer according to the gaze direction.”). 
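For context, the allocation logic the Office Action attributes to Cerrato (default allocation to the mobile layer, comparison of a feature's motion against a predetermined threshold, and reallocation when properties change) can be sketched as follows. This is an illustrative reconstruction, not code from Cerrato; the class, field names, and threshold value are assumptions.

    from dataclasses import dataclass

    MOTION_THRESHOLD = 0.05  # hypothetical per-frame displacement threshold

    @dataclass
    class GraphicalFeature:
        name: str
        motion: float          # observed displacement per frame vs. the virtual camera
        layer: str = "mobile"  # default allocation, per Cerrato ¶0068 as quoted above

    def allocate(feature: GraphicalFeature) -> GraphicalFeature:
        """Allocate to the mobile layer (higher rendering quality) or the
        static re-projection layer (lower quality, re-projected per frame)."""
        if feature.motion >= MOTION_THRESHOLD:
            feature.layer = "mobile"               # rendered every frame, full quality
        else:
            feature.layer = "static_reprojection"  # rendered rarely, then re-projected
        return feature

    features = [
        GraphicalFeature("virtual_car", motion=0.40),
        GraphicalFeature("health_bar_ui", motion=0.00),
    ]
    for f in map(allocate, features):
        print(f.name, "->", f.layer)

Reallocation (Cerrato ¶0085-¶0086 as quoted) would amount to re-running allocate whenever a feature's motion property changes, for example when a weapon is picked up by the avatar.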
As to claim 11, the combination of Cerrato, Cini and Wang discloses a method comprising: accessing feature data, the feature data including a set of features of a virtual space, wherein each feature included in the set of features is associated with at least one feature parameter, generating a set of sample electronic renderings for the virtual space for identifying a set of feature layers based on the feature data, based on one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features, determining rendering data based on the set of sample electronic renderings, wherein the rendering data includes the set of feature layers, wherein each feature layer is associated with at least one feature of the set of features, the set of feature layers representing partial renderings of the virtual space to be stored in a memory for later use in an on-demand rendering, receiving a rendering request for the on-demand rendering, the rendering request indicating a subset of features and associated feature parameters selected from the set of features as selected feature data, the rendering request retrieving, from the memory, the set of feature layers that match the rendering request and correspond to the selected feature data, and in response to receiving the rendering request, generating, using the rendering data and according to the rendering request, an electronic rendering of the virtual space based on the selected feature data, the electronic rendering of the virtual space including: a first part of the virtual space rendered in real time based on unavailability of a pre-rendered feature layer, and a second part of the virtual space rendered based on the retrieved set of feature layers (See claim 1 for detailed analysis.). As to claim 12, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses determining, based on the feature-to-feature influence, the set of feature layers organizing the subset of features having a threshold level of the feature-to-feature influence on a same one of the feature layers of the set. (See claim 2 for detailed analysis.). As to claim 13, claim 12 is incorporated and the combination of Cerrato, Cini and Wang discloses the subset of features having feature-to-feature influence includes at least one of an appliance having a reflective surface, a cabinet, and a light fixture (See claim 3 for detailed analysis.). As to claim 14, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses the set of sample electronic renderings is associated with a first rendering quality metric and the electronic rendering is associated with a second rendering quality metric, wherein the second rendering quality metric is different than the first rendering quality metric (See claim 4 for detailed analysis.). As to claim 15, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses the set of sample electronic renderings is associated with a first rendering quality metric and the electronic rendering is associated with a second rendering quality metric, wherein the second rendering quality metric is higher than the first rendering quality metric (See claim 5 for detailed analysis.). 
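The feature-to-feature influence mapped to Cini in claims 2-3 and 12-13 rests on similarity measures (e.g., cosine similarity) compared against a relatedness threshold. A minimal sketch of that idea follows; the attribute vectors and threshold are invented for illustration and are not drawn from Cini.

    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    SIMILARITY_THRESHOLD = 0.8  # hypothetical minimum degree of relatedness

    # Hypothetical attribute vectors (e.g., reflectivity, light output, mobility).
    features = {
        "stainless_fridge": [0.9, 0.1, 0.0],
        "light_fixture":    [0.7, 0.9, 0.0],
        "cabinet":          [0.3, 0.0, 0.0],
    }

    # Pairs above the threshold would be grouped onto the same feature layer
    # (compare claim 2's "threshold level of the feature-to-feature influence").
    names = list(features)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            s = cosine_similarity(features[a], features[b])
            verdict = "same layer" if s >= SIMILARITY_THRESHOLD else "separate layers"
            print(f"{a} vs {b}: {s:.2f} -> {verdict}")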
As to claim 16, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses each of a plurality of sample electronic renderings included in the set of sample electronic renderings is associated with a different configuration of one or more features and associated feature parameters of the one or more features (See claim 6 for detailed analysis.). As to claim 17, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses the rendering data describes variations between sample electronic renderings included in the set of sample electronic renderings, and wherein the variations describe how at least one sample electronic rendering changed in response to a different configuration of one or more features and associated feature parameters of the one or more features (See claims 7-8 for detailed analysis.). As to claim 18, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses the rendering data identifies at least one portion of the virtual space that when rendered changes in response to a different configuration of one or more features and associated feature parameters of the one or more features (See claim 9 for detailed analysis.). As to claim 19, claim 11 is incorporated and the combination of Cerrato, Cini and Wang discloses generating the electronic rendering comprises: identifying at least one portion of the virtual space that when rendered changes in response to one or more changes in the subset of features and associated feature parameters, and re-rendering the at least one portion of the virtual space as part of the generated electronic rendering (See claim 10 for detailed analysis.). As to claim 20, the combination of Cerrato, Cini and Wang discloses a non-transitory computer-readable medium storing software instructions that, when executed, cause an apparatus to: access feature data, the feature data including a set of features of a virtual space, wherein each feature included in the set of features is associated with at least one feature parameter, generate a set of sample electronic renderings for the virtual space for identifying a set of feature layers based on the feature data, determine rendering data based on the set of sample electronic renderings, wherein the rendering data includes the set of feature layers, wherein each feature layer is associated with at least one feature of the set of features, based on an analysis of one or more sample electronic renderings of the set of sample electronic renderings, determine a feature-to-feature influence within the set of features, the set of feature layers representing partial renderings of the virtual space to be stored in a memory for later use in an on-demand rendering, receive a rendering request for the on-demand rendering, the rendering request indicating a subset of features and associated feature parameters selected from the set of features as selected feature data, the rendering request retrieving, from the memory, the set of feature layers that match the rendering request and correspond to the selected feature data, and in response to receiving the rendering request, generate, using the rendering data, an electronic rendering of the virtual space based on the selected feature data, the electronic rendering of the virtual space including: a first part of the virtual space rendered in real time based on unavailability of a pre-rendered feature layer, and a second part of the virtual space rendered based on the retrieved set of feature layers (See claim 1 for
detailed analysis.). As to claim 21, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the determining the feature-to-feature influence within the set of features based on at least one of: a presence of a reflection between a first feature and a second feature, an alteration of the first feature with respect to the second feature, and a statistical correlation between the first feature to the second feature (Cini, ¶0036, “plurality of attributes 116 may include a lighting attribute, defined an attribute that affects ambient light in interior space. Lighting attribute may include isolated and/or aggregate effects items represented by features 120 have on lighting in internal space; such changes may include, without limitation, changes representing increased or decreased light output from light fixtures, decreased or increased light admitted by windows, addition of shades, frosted glass, or other elements that occlude or reduce light output or transmittance, increases or decreases in reflectiveness of item, increases or decreases in fluorescence or phosphorescence, increases or decreases in opacity or translucence, or the like. Lighting attribute may include a pattern of light and/or shadow cast on visual representation of features 120 and may be modified to reflect change; visual representation of second feature 160 may be changed accordingly as well. Lighting attributes such as light and shadow patterns in three-dimensional model may be tracked and represented as an attribute of three-dimensional model and/or of interior space data structure 112 containing the three-dimensional model” ¶0053, “classifies features 120, attributes, and/or combinations of features 120 and/or attributes to one or more global styles most probably associated with such features 120, attributes, and/or combinations of features 120 and/or attributes.” ¶0057, “Short distances between features 120, attributes, and/or combinations thereof to global style attributes and a cluster may indicate a higher degree of similarity between a feature 120, attribute, and/or combination thereof to global style attributes and a particular cluster. Longer distances between features 120, attributes, and/or combinations thereof to global style attributes and a cluster may indicate a lower degree of similarity between the features 120, attributes, and/or combinations thereof to global style attributes and a particular cluster.” ¶0058, “indicating a high degree of similarity between a feature 120, attribute, and/or combination thereof and a particular data entry cluster representing a global style attribute.” ¶0058, “indicative of greater degrees of similarity. Degree of similarity index values may be compared to a threshold number indicating a minimal degree of relatedness suitable for inclusion of a feature 120, attribute, and/or combination thereof in a cluster, where degree of similarity indices falling under the threshold number may be included as indicative of high degrees of relatedness.” ¶0076, “where first attribute 144 includes a given amount of weight on a wall-mounted feature 120, second attribute 156 may include an associated structural attribute requiring a stud or other supporting element to be present in a wall where the wall-mounted feature 120 is located, which modeling device 104 may use to select second attribute 156. 
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Cerrato et al. (US Pub 2023/0141027 A1) in view of Cini (US Pub 2022/0406027 A1), further in view of Wang et al. (US Pub 2019/0095985 A1) and Kar et al. (US Pub 2020/0228774 A1).
As to claim 22, claim 1 is incorporated and the combination of Cerrato, Cini and Wang discloses the first rendering quality metric being associated with low-quality rendering techniques including (Cerrato, ¶0048, “an image for display can be generated by the output image generator 230 based on rendered graphical features for a single layer of the plurality of layers. For example, in the case of an image frame comprising just a sky portion of a virtual environment, the graphical feature corresponding to the sky may be allocated to a respective layer so that the rendering resources associated with that respective layer are used by the rendering circuitry 220 for rendering the sky portion and the output image generator 230 generates the image frame for display. Alternatively, for a more feature rich image frame comprising a number of respective graphical features, one or more of the graphical features are rendered with a first rendering quality associated with a first layer and one or more of the graphical features are rendered with a second rendering quality associated with a second layer, and the output image generator 230 generates an image for display comprising graphical features allocated to the respective different layers and thus having been rendered with different rendering qualities. As such, the output image generator 230 is configured to combine one or more rendered graphical features allocated to a first layer with one or more rendered graphical features allocated to a second layer to generate the image frame.”

¶0049, “the virtual car and sky portion are each rendered with a rendering quality associated with their corresponding layer,” “using a lower rendering quality for a feature in one layer whilst using a higher rendering quality for another feature in another layer”

¶0090, “an image resolution associated with the mobile layer may be lower than an image resolution associated with the mobile re-projection layer”

¶0095-¶0096, “peripheral parts of the user's vision are observed with lower detail. As such, the allocation circuitry 210 can be configured to increase a rendering quality for a respective layer having a graphical feature corresponding to the gaze direction”

Cerrato used gaze tracking to lower the number of reflective interactions and sight-line overlays that need to be rendered.).

Cerrato, Cini and Wang do not disclose low-quality rendering techniques including rasterization. However, rasterization is just one of the well-known rendering techniques. Kar teaches low-quality rasterization rendering (Kar, ¶0055, “rasterization is a technique by which objects in an image are converted into a set of polygons (e.g., triangles), which can be translated and used to determine pixel values for a newly rendered view. Although such an approach can be less computationally intensive than ray tracing, the computational power required for high-quality rendering is still far beyond that available on devices of limited computing power and does not support real-time rendering of novel viewpoints. Further, rasterization often results in images that are unrealistic and subject to various visual artifacts.”).

Kar also suggests a lower number of reflective interactions and sight-line overlays (Kar, ¶0066, “Third, the system may need only limited computing resources such as computation power or memory to perform viewpoint rendering. Fourth, the system may allow for the rapid rendering of novel viewpoints and/or allow for interactive rendering rates.” “the system may include representations having alpha layers allowing for the capture of partially reflective or transparent objects as well as objects with soft edges. Eleventh, the system may provide for the generation of stackable images from different viewpoints” viewpoint rendering involves a lower number because it renders only for the current viewpoint.).
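To make the per-layer quality point concrete: the Cerrato passages quoted above render different layers at different qualities and composite them, with Kar positioning rasterization as the cheaper technique. The sketch below is an invented Python illustration of that idea only; Layer, rasterize, compose_frame, and the resolution numbers are assumptions, not anything taken from the cited references.

```python
# Illustrative sketch only: per-layer rendering quality as in Cerrato
# ¶0048-¶0049, with a cheap raster pass standing in for the lower-quality
# technique per Kar ¶0055. All names and values are invented.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    resolution_scale: float  # 1.0 = full quality, <1.0 = cheaper raster pass

def rasterize(layer, base_resolution=(1920, 1080)):
    """Cheap raster pass at the layer's (possibly reduced) resolution."""
    w, h = base_resolution
    return (layer.name, int(w * layer.resolution_scale), int(h * layer.resolution_scale))

def compose_frame(layers):
    """Render each layer at its own quality, then combine into one frame."""
    return [rasterize(layer) for layer in layers]

# A gaze-directed feature keeps full quality; the peripheral sky takes a cheap pass.
frame = compose_frame([Layer("virtual_car", 1.0), Layer("sky", 0.5)])
print(frame)  # -> [('virtual_car', 1920, 1080), ('sky', 960, 540)]
```

Here the gaze-directed layer keeps full resolution while the peripheral layer is rendered cheaply, mirroring the gaze-tracking rationale in the quoted Cerrato ¶0095-¶0096 passage.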
Cerrato, Cini, Wang and Kar are considered to be analogous art because all pertain to image rendering. It would have been obvious before the effective filing date of the claimed invention to have modified Cerrato with the feature of “low-quality rasterization rendering” as taught by Kar. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is (571) 270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Jul 11, 2023
Application Filed
Apr 07, 2025
Non-Final Rejection — §103
Jun 30, 2025
Response Filed
Jul 05, 2025
Final Rejection — §103
Oct 08, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Examiner Interview Summary
Oct 09, 2025
Request for Continued Examination
Oct 13, 2025
Response after Non-Final Action
Jan 13, 2026
Non-Final Rejection — §103
Mar 26, 2026
Interview Requested
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary
Apr 03, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497
THIN FILM TRANSISTOR AND ARRAY SUBSTRATE
2y 5m to grant Granted Apr 14, 2026
Patent 12597176
IMAGE GENERATOR AND METHOD OF IMAGE GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12589481
TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12588347
DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586265
LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
98%
With Interview (+29.9%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
