DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 January 2026 has been entered.
Response to Arguments
Applicant's arguments filed 27 January 2026 have been fully considered, but they are not persuasive.
Applicant’s arguments regarding the eligibility of the pending claims, see p. 9, filed 27 January 2026, with respect to claims 1-5, 7-18, and 26-27 have been fully considered and are not persuasive.
On p. 9 of Applicant’s remarks, Applicant asserts that “these operations provide improvements for a graphics rendering process, including simplifying and shortening a construction workflow for virtual models, accelerating construction speed [such that a] … computing device can rapidly and accurately achieve user-customized rendering effects for the base model and generate a virtual model of the 3D scene.” The Examiner asserts that the use of a general-purpose computer to execute a mental process is not significantly more than an abstract idea. Applicant asserts that this technique “significantly accelerates a procedure for achieving user-customized image rendering, with reducing computational resource consumption …” The mere assertion of an improvement without sufficient detail, provided in the specification and recited in the claims, does not confer eligibility. The Examiner further asserts that the recitation of “generating … the virtual model of the [3-D] space scene from the basic model automatically” is representative of extra-solution activity, and does not impart eligibility over the recitation of the aforementioned abstract idea.
Applicant’s arguments, see pp. 10-12, filed 27 January 2026, with respect to the rejection(s) of claims 1 and 26-27 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Cini (U.S. PG-PUB 2020/0242849). The previously-cited LE CHEVALIER reference is no longer relied upon in this Office action. Please see the Office action below for further rationale regarding the rejection(s) of the newly-amended claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 7-12, 14-18, and 26-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover mental process(es) (concept(s) performed in a human mind, including as an observation, evaluation, judgment, or opinion). Claim 1 recites a ‘method for building a virtual model of a [3-D] space scene by using a [3-D] design engine of a virtualization application running in a computing device …’, while claim 26 recites an ‘apparatus for building a virtual model of a [3-D] space scene …’ with essentially the same succeeding limitations as claim 1, and claim 27 recites a ‘non-transitory computer-readable storage medium storing computer instructions thereon, wherein when … processor(s) of a computing device execute the computer instructions, the computing device is caused to execute …’ also with essentially the same succeeding limitations as claim 1. This judicial exception (abstract idea) is not integrated into a practical application because the steps do not add meaningful limitations to be considered specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally and no additional features in the claims would preclude them from being performed as such.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process/method, machine, article of manufacture, or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 26-27 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories?
YES. Claim 1 is directed to a ‘method for building a virtual model of a [3-D] space scene’, i.e., a process; claim 26 is directed to an ‘apparatus for building a virtual model of a [3-D] space scene’, i.e., a machine; and claim 27 is directed to a ‘non-transitory computer-readable storage medium’, i.e., an article of manufacture.
STEP 2A (PRONG 1): Are the claims directed to a law of nature, a natural phenomenon, or an abstract idea?
YES, the claims are directed toward an abstract idea (i.e., a mental process).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, or mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, or opinion).
The limitations of the 'method' in claim 1, the 'apparatus' of claim 26, and the 'non-transitory computer-readable storage medium' of claim 27 comprise a mental process that can be practicably performed in the human mind; therefore, claims 1 and 26-27 recite an abstract idea.
Claims 1 and 26-27 recite:
‘… receiving … a configuration of a user for … rendering effect(s) to be presented for the [3-D] space scene’,
‘… acquiring … a basic model of the [3-D] space scene …’,
‘… parsing … the configuration for the … rendering effect(s) to determine a configuration for the basic model’, and
‘… processing … the basic model to achieve the … rendering effect(s) according to the determined configuration for the basic model’; and
‘… generating … the virtual model of the [3-D] space scene from the processed basic model automatically’.
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind of a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘Mental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed "conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally," i.e., "as a person would do it by head and hand."); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1139, 120 USPQ2d 1473, 1474 (Fed. Cir. 2016) (holding that claims to a mental process of "translating a functional description of a logic circuit into a hardware component description of the logic circuit" are directed to an abstract idea, because the claims "read on an individual performing the claimed steps mentally or with pencil and paper").
Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, "courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) (‘‘With the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’’); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016) (holding that computer-implemented method for "anonymous loan shopping" was an abstract idea because it could be "performed by humans without a computer").
Because both product and process claims may recite a "mental process", the phrase "mental processes" should be understood as referring to the type of abstract idea, and not to the statutory category of the claim. The courts have identified numerous product claims as reciting mental process-type abstract ideas, for instance the product claims to computer systems and computer-readable media in Versata Dev. Group. v. SAP Am., Inc., 793 F.3d 1306, 115 USPQ2d 1681 (Fed. Cir. 2015).
As such, a person could mentally perform the ‘method for building a virtual model of a [3-D] space scene’ of claim 1, and conceptually implement the ‘apparatus for building a virtual model of a [3-D] space scene’ of claim 26 and the ‘non-transitory computer-readable storage medium’ of claim 27 either mentally or using a pen and paper. Even if there were a nominal recitation that the various steps are executed by a ‘processor’/in a ‘computer’ (e.g., a processing unit), it would not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
If a claim limitation, under its broadest reasonable interpretation, covers performance of a mental step which could be performed with simple tools such as a pen and paper, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, claims 1 and 26-27 recite an abstract idea.
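For purposes of illustrating why the recited steps, absent more, amount to a simple look-up-and-transform sequence of the kind a person could carry out with pen and paper, the limitations can be sketched in a few lines of ordinary code. Every identifier, data value, and structure below is hypothetical and appears nowhere in the claims or the specification; the sketch tracks only the order of the recited steps.

```python
# Hypothetical sketch of the recited limitations; all names are invented
# for illustration and are not drawn from the claims or specification.

def build_virtual_model(user_effect_config: dict, scene_id: str) -> dict:
    # 'receiving ... a configuration of a user for ... rendering effect(s)'
    effect_config = dict(user_effect_config)

    # 'acquiring ... a basic model of the [3-D] space scene'
    basic_model = {"scene": scene_id, "objects": ["wall", "floor"], "effects": []}

    # 'parsing ... the configuration for the ... rendering effect(s)
    #  to determine a configuration for the basic model'
    model_config = [(obj, effect_config.get(obj, "none"))
                    for obj in basic_model["objects"]]

    # 'processing ... the basic model to achieve the ... rendering effect(s)'
    for obj, effect in model_config:
        basic_model["effects"].append((obj, effect))

    # 'generating ... the virtual model ... from the processed basic model'
    return {"virtual_model": basic_model}
```

Each step is a simple observation, evaluation, or recording of information, which is consistent with the mental-process characterization above.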
STEP 2A (PRONG 2): Do the claims recite additional elements that integrate the judicial exception into a practical application?
NO, claims 1 and 26-27 do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether claims 1 and 26-27 recite additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claims 1 and 26-27 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
The additional elements are claimed generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
With regard to STEP 2B, whether claims 1 and 26-27 recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the procedure that was in effect prior to the guidelines continues to apply. Examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
With regard to STEP 2B, the Guidance provided the following examples of limitations that may be enough to qualify as “significantly more” when recited in a claim with a judicial exception:
Improvement to another technology or technical field
Improvement to functioning of computer itself and/or applying the judicial exception with, or by use of, a particular machine
Effecting a transformation or reduction of a particular article to a different state or thing
Adding a specific limitation other than what is well-understood, routine, and conventional in the field, or adding unconventional steps that confine the claim to a particular useful application
Meaningful limitation beyond generally linking the use of an abstract idea to a particular technological environment.
The Guidance further set forth examples of limitations that were found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception:
Adding words to “apply it” (or an equivalent) with the judicial exception or mere instructions to implement abstract ideas on a computer
Simply appending well-understood, routine, and conventional activities previously known to the industry specified at a high level of generality to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry.
Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea
Generally linking the use of the judicial exception to a particular technological environment or field of use.
Claims 1 and 26-27 do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Thus, since claims 1 and 26-27: (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1 and 26-27 are not eligible subject matter under 35 U.S.C. § 101. A similar analysis applies to dependent claims 2-5, 7-12, and 14-18, which are similarly identified as: being directed toward an abstract idea, not reciting additional elements that integrate the judicial exception into a practical application, and not reciting additional elements that amount to significantly more than the judicial exception.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 7, 14-17, and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Capozella et al. (U.S. PG-PUB 2017/0060379, 'CAPOZELLA') in view of Cini (U.S. PG-PUB 2020/0242849, 'CINI').
Regarding claim 1, CAPOZELLA discloses a method for building a virtual model of a [3-D] space scene by using a [3-D] design engine of a virtualization application running in a computing device (CAPOZELLA; ¶ 0025; “A model control unit 130 [‘[3-D] design engine’] is coupled to computing system 100 [‘computing device’] via communication link 140. … model control unit 130 and computing system 100 … [are] a single computing system …” ¶ 0026; “Model control unit 130 provides options for users to control, configure and operate an augmentable and spatially manipulable 3D base model image of an equipment specimen [‘building a virtual model of a [3-D] space scene’].” ¶ 0034; “Software 506 includes application 526 [‘virtualization application’] which itself includes one or both of display processes 480, 490.”), comprising:
receiving, by the computing device, a configuration of a user for … rendering effect(s) to be presented for the [3-D] space scene (CAPOZELLA; FIG. 3B; ¶ 0029; “Jam signal 134 is processed to produce supplemental augmenting data that updates augmented model image 152 to create an updated augmented model image 154 in which rotating drum 105 has jammed (as indicated by drum rattling 103 and by smoke 106 in FIG. 3B — in the generated model image 154 on computing system 100, image 154 actually shows the drum 105 shaking or rattling and shows smoke emerging from drum 105 [‘rendering effects to be presented’] in which a jam has occurred).” FIG. 3D; ¶ 0031; “If a user wants to see the equipment specimen without the smoke 106 obscuring part of the view of the machinery, then the user can delete the smoke 106 by selecting that option on computing system 100 [‘receiving a configuration of a user’]. … the user can use display system 110 (e.g., a touchscreen) to touch 160 the smoke component of the augmented model. A menu 162 or other option selection interface is presented that allows the user to select “Delete feature” from menu 162. … Once the smoke deletion selection has been made, the augmented model image is updated again to render image 158 of FIG. 3E.”);
acquiring, by the [3-D] design engine, a basic model of the [3-D] space scene (CAPOZELLA; FIGS. 1A-1B; ¶ 0021; “Data for generating images on display system 110 can be acquired from a base model data source 120, such as a target image or other data target that provides base model data 122 [‘basic model of the [3-D] space scene’] that is optically … readable and allows for determining and updating the spatial relationship between the base model data source 120 and any data acquisition device … that can move relative to the base model data source 120. Base model data 122 can include edge lines and other optically-readable indicia that provide both compositional data (defining the [3-D] appearance of the equipment specimen, its components and their relative arrangement) and spatial data (defining the distance, perspective and orientation of the equipment specimen relative to an observer) for display system 110 … Other sources of base model image data and types of base model image data can be used in augmentable modeling, including other types of data that permit rendering and updating of spatial data relative to an observer or data-collecting computing system” ¶ 0022; “Computing system 100 also includes an optical data acquisition device 102 (e.g., a camera … that can be … mounted to computing system 100). When optical data acquisition device 102 first acquires the base model data 122, that optical data is processed by computing system 100 to generate a [3-D] base model image 104 … The base model image 104 … is a [3-D] image of a specimen that can be [an] environment …”), wherein the basic model comprises respective [3-D] models of solid objects in the [3-D] space scene (CAPOZELLA; FIGS. 1A-1B; ¶ 0024; “… the equipment specimen includes a rotating drum component 105 … and a warning light tower 108.”), the respective [3-D] models being geometric models without rendering effects (CAPOZELLA; ¶ 0022; “The computing system can also freeze the model/image of the equipment so that it doesn't move as changes are made to it.”);
parsing, by the computing device, the configuration for the … rendering effect(s) to determine a configuration for the basic model (CAPOZELLA; FIG. 4A; ¶ 0032; “The base model data is processed by processing system 401 to generate an initial base model image which can be displayed on display system 410. Display system 410 displays the most current model image, which can be either the base model (if no augmentation has yet occurred) or the most recent augmented model image. Augmenting data is received by processing system 401 [‘parsing the configuration’] … from a model control unit 430. This augmenting data is combined [‘rendering effects’] with the current model image to generate an updated model image [‘determine a configuration for the basic model’] on display system 410. Augmenting data [is] provided via user input at unit 430. Also, augmenting data (e.g., user input) [is] received at the display system 410 (e.g., via touchscreen) or through another user interface of computing system 400, after which display system 410 shows an updated (most current) model. Augmenting data can also be sent to the model control unit 430 for updating any demonstration/training … relating to the model image.”);
processing, by the [3-D] design engine, the basic model to achieve the … rendering effects according to the determined configuration for the basic model (CAPOZELLA; ¶ 0029; “To demonstrate jamming of the rotating drum, … a “drum jam” option on model control unit 130 [‘[3-D] design engine’] … can then be selected, generating a jam signal 134 that can be used internally within model control unit 130 … Jam signal 134 is processed to produce supplemental augmenting data that updates augmented model image 152 to create an updated augmented model image 154 in which rotating drum 105 has jammed [‘processing, by the [3-D] design engine, the basic model to achieve the … rendering effects’] (as indicated by drum rattling 103 and by smoke 106 in FIG. 3B — in the generated model image 154 … [which] actually shows the drum 105 shaking or rattling and shows smoke emerging from drum 105 in which a jam has occurred).”).
CAPOZELLA does not explicitly disclose that the parsing comprises determining a configuration for … attribute parameter(s) of the basic model based on the configuration for the … rendering effect(s) and predefined mapping rules between the configuration for the … rendering effect(s) and the configuration for … attribute parameter(s) of the basic model, which CINI discloses (CINI; ¶ 0017; “… [this] provides a [3-D] modeling system for designs of interior spaces … that permits operators to select, arrange, and modify features of the [3-D] model to reflect potential changes to a corresponding real interior space accurately. Interdependencies may be detected and/or performed using machine learning processes such as k-means clustering algorithms, feature learning, and/or classifiers [‘determining a configuration for … attribute parameter(s)’]; processes may be represented and/or archived using a decision tree data structure. … not only may the appearances of introduced or modified features be portrayed accurately in situ, but effects of features on one another or on … goal(s) of a project may also be represented accurately [‘rendering effect(s)’]; this result [is] enabled by introduction of data structures marrying [3-D] models of spaces and features with data elements representing seen/unseen attributes of such features [‘attribute parameter(s) of the basic model’], as well as rules for interactions of such data elements between data structures that affect in turn rules for rendering a resulting [3-D] model of a space and its contents [‘predefined mapping rules’].”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method for building a virtual model of a [3-D] space scene of CAPOZELLA to include the determining a configuration for … attribute parameter(s) of the basic model based on the configuration for the … rendering effect(s) and predefined mapping rules between the configuration for the … rendering effect(s) and the configuration for … attribute parameter(s) of the basic model of CINI. The motivation for this modification is to provide a system for rendering and modifying 3-D models for interior design that includes a modeling device for receiving a current design of an interior space, generating a data structure representing the interior space by populating attributes of the data structure, and generating a first 3-D model of a first portion of the interior space based on the current design (CINI; Abstract).
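As a non-limiting sketch of what such ‘predefined mapping rules’ between a rendering-effect configuration and attribute parameters of a basic model could look like in practice: the rule table, effect names, and parameter names below are hypothetical illustrations and are not drawn from CINI.

```python
# Hypothetical illustration of 'predefined mapping rules' from a
# rendering-effect configuration to attribute parameters of a basic model.
PREDEFINED_MAPPING_RULES = {
    "glossy":   {"specular": 0.9,  "roughness": 0.1},
    "matte":    {"specular": 0.1,  "roughness": 0.8},
    "metallic": {"specular": 0.95, "metalness": 1.0},
}

def determine_attribute_parameters(effect_config):
    """Map each requested rendering effect to attribute parameters
    via a simple rule-table lookup."""
    params = {}
    for effect in effect_config:
        # an unrecognized effect contributes no attribute parameters
        params.update(PREDEFINED_MAPPING_RULES.get(effect, {}))
    return params
```

The point of the sketch is only that a predefined mapping of this kind is a fixed look-up from one configuration vocabulary to another.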
CAPOZELLA-CINI disclose generating, by the [3-D] design engine, the virtual model of the [3-D] space scene from the processed basic model automatically (CAPOZELLA; FIGS. 3A-3E, 4A-4B; ¶ 0033; “FIG. 4B shows a method … 490 in which a computing system acquires base model data (491) and … renders a base model image (492). If augmenting data is received (493), then it is combined with any current model image to generate an augmented model (494). If no augmenting data is received, then a check is made for a change in the spatial data relating to the current model image (495). If spatial data relating to the current model image has changed (e.g., the base model data source is closer, farther away, at a different angle, in a different orientation), then the size, perspective and/or orientation of the current model image (whether a base model image or an augmented model image) is updated (496). Whether or not the spatial data changes, method 490 then returns to checking for augmenting data (493).”).
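The update loop of method 490 quoted above (acquire base model data and render a base model image, then repeatedly either combine any received augmenting data into the current model image or fold in spatial changes, and return to checking for augmenting data) can be sketched as follows; the function name, event tuples, and string values are hypothetical stand-ins for the quoted steps.

```python
# Sketch of the update loop described for method 490 (FIG. 4B of CAPOZELLA);
# all names and values are hypothetical stand-ins.
def run_model_loop(events):
    model = "base_model_image"      # (491)-(492): acquire base model data, render image
    history = [model]
    for kind, payload in events:    # each pass corresponds to steps (493)-(496)
        if kind == "augment":       # (493)-(494): combine augmenting data with current image
            model = f"{model}+{payload}"
        elif kind == "spatial":     # (495)-(496): update size/perspective/orientation
            model = f"{model}@{payload}"
        history.append(model)       # display system shows the most current model
    return history
```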
Independent claims 26-27 exhibit similar scope and recite similar limitations when compared to independent claim 1; therefore, the same motivation to combine the references is maintained.
Regarding claim 26, CAPOZELLA-CINI disclose an apparatus for building a virtual model of a [3-D] space scene, comprising:
at least one processor (CAPOZELLA; FIG. 5; ¶ 0037; “Processing system 501 loads and executes software 506 from storage system 504. When executed by computing system 500 in general, and processing system 501 in particular, software 506 directs computing system 500 to operate …”); and
a memory coupled to the … processor and configured to store computer instructions, wherein when executed by the … processor (CAPOZELLA; FIG. 5; ¶ 0039; “Storage system 504 may comprise any computer-readable media or storage media readable by processing system 501 and capable of storing software 506.”), the computer instructions cause the apparatus to execute the following operations (CAPOZELLA; FIG. 5; ¶ 0038; “… processing system 501 may comprise a microprocessor … that retrieves and executes software 506 from storage system 504.”) comprising: … ([The remaining limitations are repeated verbatim from those recited in claim 1.]).
Regarding claim 27, CAPOZELLA-CINI disclose a non-transitory computer-readable storage medium storing computer instructions thereon, wherein when … processor(s) of a computing device execute the computer instructions (CAPOZELLA; FIG. 5; ¶ 0039; “Storage system 504 may comprise any computer-readable media or storage media readable by processing system 501 and capable of storing software 506.”), the computing device is caused to execute the following operations comprising: … ([The remaining limitations are repeated nearly verbatim from those recited in claim 1.]).
Regarding claim 2, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
providing a first configuration interface which comprises an item indicating the configuration for the … rendering effect(s) (CAPOZELLA; FIG. 3D; ¶ 0031; “A menu 162 or other option selection interface is presented that allows the user to select “Delete feature” from menu 162.” [The Examiner notes that a ‘feature’ may be moved or made transparent using ‘menu 162’.]); and
receiving, via the first configuration interface, the configuration of the user for the … rendering effect(s) (CAPOZELLA; FIG. 3D-3E; ¶ 0031; “A “delete smoke” signal 138 is sent via communication link 140 from computing system 100 to model control unit 130 so that information being processed and possibly presented on model control unit 130 can also be updated (again, the control signal 138 can be input at model control unit 130 as well). Once the smoke deletion selection has been made, the augmented model image is updated again to render image 158 …”).
Regarding claim 4, CAPOZELLA-CINI disclose the method according to claim 1, wherein the configuration of the user for the … rendering effect(s) comprises
… picture(s) determined to be applied to the … rendering effect(s) by the user (CAPOZELLA; FIGS. 3A-3E; ¶ 0031; “Once the smoke deletion selection has been made, the augmented model image is updated again to render image 158 … Throughout the sequence of FIGS. 3A-3E, movement of optical data acquisition device 102 changes the base model image (by updating the spatial data component of the model image data) and the augmenting data provided via model control unit 130 and/or display system 110 will be adapted to reflect the change in the model's appearance (e.g., size, orientation).”), and
parsing the configuration for the … rendering effect(s) to determine a configuration for the basic model comprises:
determining how to apply the … picture(s) to the basic model according to the configuration for the … rendering effect(s) (CAPOZELLA; FIGS. 3A-3E; ¶¶ 0029-0031).
Regarding claim 5, CAPOZELLA-CINI disclose the method according to claim 4, wherein processing the basic model comprises:
performing image processing on the … picture(s) (CAPOZELLA; FIG. 3A; ¶ 0028; “Drum rotation signal 132 is processed to produce augmenting data that then augments base model image 104 to create augmented model image 152 in which drum 105 is shown to rotate (as indicated … by arrow 153—in the generated model image on computing system 100, image 152 actually shows the drum 105 moving as instructed).” FIG. 3B; ¶ 0029; “Jam signal 134 is processed to produce supplemental augmenting data that updates augmented model image 152 to create an updated augmented model image 154 in which rotating drum 105 has jammed (as indicated by drum rattling 103 and by smoke 106 …—in the generated model image 154 …, image 154 actually shows the drum 105 shaking or rattling and shows smoke emerging from drum 105 in which a jam has occurred).”); and
presenting the processed … picture(s) in the basic model (CAPOZELLA; ¶ 0031; “Throughout the sequence of FIGS. 3A-3E, movement of optical data acquisition device 102 changes the base model image (by updating the spatial data component of the model image data) and the augmenting data provided via model control unit 130 and/or display system 110 will be adapted to reflect the change in the model's appearance (e.g., size, orientation).”).
Regarding claim 7, CAPOZELLA-CINI disclose the method according to claim 1, wherein the … rendering effect(s) comprise a time-varying dynamic effect (CAPOZELLA; ¶ 0019; “Movement of the target image source and/or the target image capturing device alters the spatial presentation of the base model to which the augmenting data is applied (e.g., by providing updated spatial data that modifies the spatial data originally provided as a component of base model data used to render an image of the base model). … a user can move the base model data acquisition device about the base model data source just as a person standing in the same space as a real equipment specimen could walk around the equipment (and/or move the equipment to view different perspectives of the equipment at different distances/orientations). … movement of the base model data source can change the [3-D] base model's position, size and/or orientation. … the augmented model changes and/or updates dynamically as user interaction and selections are implemented.” [The Examiner asserts that ‘movement’ occurs over time and is therefore ‘time-varying’.]).
Regarding claim 14, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
acquiring basic data of the [3-D] space scene; and
generating the basic model of the [3-D] space scene based on the basic data (CAPOZELLA; FIG. 1A; ¶ 0022; “Computing system 100 … includes an optical data acquisition device 102 (e.g., a camera or other reader …). When optical data acquisition device 102 first acquires the base model data 122, that optical data is processed … to generate a [3-D] base model image 104 … The base model image 104 … is a [3-D] image of a specimen that can [an] environment …”).
Regarding claim 15, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
providing a second configuration interface, which comprises a group of adjustable items, wherein each adjustable item indicates a rendering effect to be presented for … component(s) in the generated model of the [3-D] space scene (CAPOZELLA; FIG. 3D; ¶ 0031; “A menu 162 [‘providing a … configuration interface’] or other option selection interface is presented that allows the user to select “Delete feature” from menu 162.”; [The Examiner asserts that ‘menu 162’ has a grouping of adjustments: deletion, moving, changing transparency, which all relate to rendering effect(s) for at least one component in the model; in this case, the smoke cloud may be adjusted in the 3-D scene.]);
receiving, via the second configuration interface, a configuration of the user for at least one adjustable item (CAPOZELLA; FIG. 3D; ¶ 0031; “If a user wants to see the equipment specimen without the smoke 106 obscuring part of the view of the machinery, then the user can delete the smoke 106 by selecting that option on computing system 100. … the user can use display system 110 (e.g., a touchscreen) to touch 160 the smoke component of the augmented model.”);
parsing the configuration of the user for the … adjustable item for at least one component to determine a configuration for the … component (CAPOZELLA; FIG. 3D; ¶ 0031; “Throughout the sequence of FIGS. 3A-3E, movement of optical data acquisition device 102 changes the base model image (by updating the spatial data component of the model image data) and the augmenting data provided via model control unit 130 and/or display system 110 will be adapted to reflect the change in the model's appearance (e.g., size, orientation).”); and
adjusting the … component according to the determined configuration for the … component (CAPOZELLA; FIG. 3D; ¶ 0031; “A “delete smoke” signal 138 is sent … from computing system 100 to model control unit 130 so that information being processed and possibly presented on model control unit 130 can also be updated (again, the control signal 138 can be input at model control unit 130 as well). Once the smoke deletion selection has been made, the augmented model image is updated again to render image 158 of FIG. 3E.”).
Regarding claim 16, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
providing a third configuration interface which comprises a group of adjustable items, wherein each adjustable item indicates a scene effect which can be used for … component(s) in the generated model of the [3-D] space scene (CAPOZELLA; FIG. 3D; ¶ 0031; [The Examiner asserts that ‘menu 162’ constitutes a ‘configuration interface’ which provides items for adjustment of … rendering/scene effect(s), namely ‘smoke 106.’]);
receiving, via the third configuration interface, a configuration of the user for at least one plug-in item of at least one component of the … component(s) (CAPOZELLA; FIG. 3D; ¶ 0031; [The Examiner asserts ‘smoke 106’ is analogous to a ‘plug-in item’ as interpreted from ¶ [0090] of the instant specification. ‘Smoke 106’ is plugged in or augmented onto the existing object in the scene.]); and
applying a corresponding scene effect to the … component according to the configuration of the user for the … plug-in item (CAPOZELLA; FIGS. 3B-3D; ¶ 0029; “To demonstrate jamming of the rotating drum, … a “drum jam” option on model control unit 130 … can then be selected, generating a jam signal 134 that can be used internally within model control unit 130 … Jam signal 134 is processed to produce supplemental augmenting data that updates augmented model image 152 to create an updated augmented model image 154 in which rotating drum 105 has jammed (as indicated by drum rattling 103 and by smoke 106 … — in the generated model image 154 …, image 154 actually shows the drum 105 shaking or rattling and shows smoke emerging from drum 105 in which a jam has occurred).”).
Regarding claim 17, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
transmitting the generated model of the [3-D] space scene to an associated server (CAPOZELLA; FIGS. 1A-1B, 2; ¶ 0025; “A model control unit 130 is coupled to computing system 100 via communication link 140. … model control unit 130 and computing system 100 … can be considered a single computing system implementing processes and methods described herein. Link 140 can be a single element or component, or it can be composed of multiple segments, devices, etc. that provide for appropriate signal processing, communication bridging and the like between model control unit 130 and computing system 100 [‘transmitting’]. The communication link 140 can connect local and/or remote model control units and can permit two-way communication between the model control unit 130 and the computing system 100 [‘transmitting’]. … using model control units 130 that are providing demonstrations and/or other interactive activity, … communications between computing system 100 and any model control units 130 can utilize a more specific communication link. When coupled to computing system 100, the signals of … model control units 130 can be fed via Ethernet connections 142 …” ¶ 0026; “Model control unit 130 provides options for users to control, configure and operate an augmentable and spatially-manipulable 3D base model image … Operational … selections implemented by users of such demonstration equipment generate data communicated (either directly or after suitable processing) to the computing system 100 to generate augmenting data (e.g., where augmenting data can … include spatial changes (moving a camera or mobile device that is used to receive base model data), and operational changes (user inputs to change equipment operation)) that can be combined with the 3D base model image 104 to illustrate how an equipment specimen actually operates in a real-world environment.”).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over CAPOZELLA in view of CINI as applied to claim 1 above, and further in view of Gandikota et al. (U.S. PG-PUB 2010/0013833, 'GANDIKOTA').
Regarding claim 3, CAPOZELLA-CINI disclose the method according to claim 1; however, CAPOZELLA-CINI do not disclose the following further limitations, which are disclosed by GANDIKOTA:
maintaining a group of configuration file templates, wherein a configuration file template comprises a configuration rule for the … rendering effect(s) to be presented for the [3-D] space scene (GANDIKOTA; FIG. 5; ¶ 0031; “… modifying geometric relationships in a solid model representation that is manipulated [with] software instructions for design … The system accesses a data file [‘maintaining a group of configuration file templates’] defining a geometric model (Step 500). The system converts the data file definitions into a visual representation of the geometric model [‘configuration rule for the … rendering effect(s)’], wherein the visual representation is in a boundary representation format (Step 505). The system displays the visual representation of the geometric model to a user (Step 510).”);
receiving a setting of the user for configuration parameters in a given configuration file template of the group of configuration file templates (GANDIKOTA; FIG. 5; ¶ 0031; “The system identifies an edit feature for modification on a body of the geometric model (Step 515). The system calculates a modified geometric model with the modified edit feature to display to the user (Step 520). The system displays the modified geometric modeler to the user (Step 525).”);
generating, based on the setting of the user for the configuration parameters in the given configuration file template, a configuration file indicating the configuration of the user for the … rendering effect(s) (GANDIKOTA; FIG. 5; ¶ 0031; “In Step 520, the system … creates a mapping for a plurality of faces from the edit feature to a new edit feature; applied the new edit feature to the original body, wherein the new edit feature is remapped to a new body and the new body is modified; and integrates the new feature with the modified geometric model.”); and
determining the configuration for the basic model by parsing the configuration file (GANDIKOTA; FIG. 6; ¶ 0032; “Once the interaction has been created, … the variational modeling toolkit 405 handles the modification computations by way of the variational modeling toolkit API 615”; FIG. 7; ¶ 0033; “… a designer 700, e.g., the user, accesses an application 705, e.g., the solid modeling application 605, to modify a solid model. The application 705 access a solid model [DB] 710 … to access the solid model for modification determined by the designer 700. The solid model [DB] 710 returns the data files 610 corresponding to the designer's request that are loaded by the application 705 [which] then loads the solid model for display to the designer 700, at which time the designer 700 intends to modify some portion of the solid model … The application 705 then creates an interaction object 715 with the variational modeling toolkit 405 … to express a particular model state of the solid model. The interaction object 715 is returned as a tag that is supplied for all subsequent calls on the interaction data structure functions …”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of CAPOZELLA-CINI to include the configuration file template features of GANDIKOTA cited above. The motivation for this modification is to use a file-based structure to systematically track and implement changes from a user within a 3-D modeling environment.
Claims 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over CAPOZELLA in view of CINI as applied to claim 1 above, and further in view of Kunath et al. (U.S. PG-PUB 2012/0232787, 'KUNATH') and GANDIKOTA.
Regarding claim 8, CAPOZELLA-CINI disclose the method according to claim 1, further comprising:
receiving a selection of the user for at least one component in the model of the [3-D] space scene (CAPOZELLA; FIG. 3D, ‘menu 162’; ¶ 0031; “If a user wants to see the equipment specimen without the smoke 106 obscuring part of the view of the machinery, then the user can delete the smoke 106 by selecting that option on computing system 100.”);
providing a fourth configuration interface which comprises a group of event items (CAPOZELLA; FIG. 3D, ‘menu 162’; ¶ 0031); however, CAPOZELLA-CINI do not explicitly disclose that each event item indicates an event that can be presented at the … component, which KUNATH discloses (KUNATH; ¶ 0040; “… the navigation device 102 … may … receive an input from a user of the navigation system 100 [‘providing a … configuration interface’] identifying … conditions or criteria [‘group of event items’] to be considered by the navigation device 102 in generating, determining, identifying, or … calculating a route. A user may specify preferences or cost values for use by the navigation device 102. The user may select which conditions or criteria may contribute to the calculation. The user may prioritize particular criteria such that high-priority criteria may contribute more to the calculation of the route than the low-priority criteria. … a user may indicate a top preference for a shady route [The Examiner asserts that the condition of ‘shadiness’ is an ‘event’ determined by the placement of the sun and the presence of clouds, mountains, etc. relative to the location of the user.] …”); nor do CAPOZELLA-CINI explicitly disclose:
receiving, via the fourth configuration interface, a selection of the user for at least one event item of the … event item(s), which KUNATH also discloses (KUNATH; ¶ 0040; “The user may select which conditions or criteria may contribute to the calculation. The user may prioritize particular criteria such that high-priority criteria may contribute more to the calculation of the route than the low-priority criteria. … a user may indicate a top preference for a shady route …”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of CAPOZELLA-CINI to include KUNATH's disclosure that each event item indicates an event that can be presented at the component, and the receiving, via the … configuration interface, of a selection of the user for one event item of the event items. The motivation for this modification is to provide a hyper-realistic, graphical simulation of a three-dimensional environment by using shadowing effects based on topography, climate, weather, solar activity, etc.
CAPOZELLA-CINI-KUNATH do not explicitly disclose generating an event toolkit describing an event indicated by the selected at least one event item for the component, by using a domain-specific language, which GANDIKOTA discloses (GANDIKOTA; FIGS. 4A-4B; ¶ 0029; “The software application 400 may be in the form of a solid modeling application such as the aforementioned CAD application 205, the CAE application 210 or CAM application 215. … the software application 400 is provided … with particular API ("application programming interface" call features [‘domain-specific language’] for access and utilization). … as the user interacts with the software application 400, certain modification events trigger interaction with a variational modeling toolkit 405 … The software application 400 and the variational modeling toolkit 405 … utilize the logic processing module 308 in the method described by instructions provided by the method processing module 309 to call a low-level geometric modeling kernel to accomplish the certain modification events of the solid model according to the commands selected by the user and executed by the software application 400, as generally understood in the art of solid modeling … The low-level geometric modeling kernel is commonly a collection of at least a … (3D) geometric modeler 410 … and a collection of geometric software component libraries 415 …”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 1 of CAPOZELLA-CINI-KUNATH to include the generating an event toolkit describing an event indicated by the selected event item for the component, by using a domain-specific language of GANDIKOTA. The motivation for this modification is to use a cross-platform API to generate a 3-D model to enable a software programmer to use any coding language or platform of his/her choosing.
Regarding claim 9, CAPOZELLA-CINI-KUNATH-GANDIKOTA disclose the method according to claim 8, further comprising:
providing a fifth configuration interface (GANDIKOTA; FIG. 3; ¶ 0026; “The computer 300 typically includes a user interface adapter 315, which connects the [CPU] 305 via the bus 310 to … interface devices, such as a keyboard 320, mouse 325, and/or other interface devices 330, which can be any user interface device, such as a touch sensitive screen, digitized pen entry pad, etc.”) which comprises an option indicating … interaction control(s) and an identity list indicating … event(s) (GANDIKOTA; FIG. 6; [The Examiner notes that ‘modeling toolkit 405’ is depicted as containing both a ‘modeling toolkit API 615’ as well as ‘Interaction 1’ and ‘Interaction 2’. The Examiner regards the ‘Interactions 1/2’ as an ‘identity list indicating … events’]) described by the event toolkit (GANDIKOTA; ¶ 0029; “… the user interacts with the software application 400, certain modification events trigger interaction with a variational modeling toolkit 405”);
receiving, via the fifth configuration interface, a selection input by the user for an interaction control of the … interaction control(s) and a selection input by the user for an identity in the identity list indicating the … event(s) (GANDIKOTA; FIG. 7; ¶ 0035; “[At] the intent state 720, the core of the variational modeling toolkit 405 interaction mode involves selection and recognition. The user selects the [face, edge, and/or vertex (FEV)] set to be changed … Where inter-instance relations are supplied, … there is a way to communicate whether the Face, Edge, or Vertex instance is to move [‘identity in the identity list indicating the … events’] …”); and
configuring the selected interaction control for triggering an event associated with the selected identity (GANDIKOTA; ¶ 0029; “… the user interacts with the software application 400, certain modification events trigger interaction with a variational modeling toolkit 405”).
Regarding claim 10, CAPOZELLA-CINI-KUNATH-GANDIKOTA disclose the method according to claim 9, further comprising:
providing a sixth configuration interface (CAPOZELLA; FIG. 3D, ‘menu 162’) which comprises an item indicating … data source(s) (CAPOZELLA; FIG. 3D, ‘base model data 122’) in an upper layer application of the virtual model (CAPOZELLA; FIG. 3D, ‘smoke 106’) of the [3-D] space scene (CAPOZELLA; FIG. 3D, ‘augmented model image 156’);
receiving, via the sixth configuration interface, a selection of the user for … data source(s) (CAPOZELLA; ¶ 0028; “A user selects a “drum rotation” option on model control unit 130, which generates a drum rotation signal 132 …”) of the … data source(s); and
binding the selected … data source(s) to the … event(s) described by the event toolkit, so that the … event(s) is/are to be triggered by using the selected … data source(s) (GANDIKOTA; ¶ 0029; “… as the user interacts with the software application 400, certain modification events trigger interaction with a variational modeling toolkit 405 [‘binding the selected … data source(s) to the … event(s)’] … The software application 400 and the variational modeling toolkit 405 together or individually utilize the logic processing module 308 in the method described by instructions provided by the method processing module 309 to call a low-level geometric modeling kernel to accomplish the certain modification events of the solid model according to the commands selected by the user and executed by the software application 400, as generally understood in the art of solid modeling”).
Regarding claim 11, CAPOZELLA-CINI-KUNATH-GANDIKOTA disclose the method according to claim 10, further comprising:
generating a toolkit describing the binding, by using a domain-specific language (GANDIKOTA; FIGS. 4A-4B; ¶ 0029; “The software application 400 [is] a solid modeling application such as … CAD application 205, the CAE application 210 or CAM application 215. … software application 400 is provided … with particular API ("application programming interface" call features [‘domain-specific language’] for access and utilization). … as the user interacts with the software application 400, … modification events trigger interaction [‘binding’] with a variational modeling toolkit 405 … The software application 400 and the variational modeling toolkit 405 … utilize the logic processing module 308 in the method described by instructions provided by the method processing module 309 to call a low-level geometric modeling kernel to accomplish the certain modification events of the solid model according to the commands selected by the user and executed by the software application 400, as generally understood in the art of solid modeling … The low-level geometric modeling kernel is commonly a collection of at least a … (3D) geometric modeler 410 … and a collection of geometric software component libraries 415 …”).
Regarding claim 12, CAPOZELLA-CINI-KUNATH-GANDIKOTA disclose the method according to claim 8, wherein the event toolkit and the toolkit describing binding are generated by using a cross-platform (GANDIKOTA; ¶ 0029; “… as the user interacts with the software application 400, certain modification events trigger interaction [‘binding’] with a variational modeling toolkit 405 [‘event toolkit’] … The low-level geometric modeling kernel is commonly a collection of at least a … (3D) geometric modeler 410 like Parasolid … [one ‘platform’] and a collection of geometric software component libraries 415 like the 3D DCM (or "DCM" product offered by Siemens Product Lifecycle Management Software Inc.) [another ‘platform’; therefore, ‘cross-platform’]”) visualization configurator (GANDIKOTA; FIG. 5; ¶ 0031; “The system accesses a data file defining a geometric model (Step 500). The system converts the data file definitions into a visual representation of the geometric model, wherein the visual representation is in a boundary representation format (Step 505). The system displays the visual representation of the geometric model to a user (Step 510).”).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over CAPOZELLA in view of CINI as applied to claim 17 above, and further in view of Roimela (U.S. PG-PUB 2015/0206337, 'ROIMELA').
Regarding claim 18, CAPOZELLA-CINI disclose the method according to claim 17, further comprising:
rendering the model of the [3-D] space scene in the server (CAPOZELLA; FIG. 1A; ¶ 0021; “Display system 110 typically receives data to display from computing system 100, and may be integrated within computing system 100, such as in a … tablet or smartphone, or may be separate from computing system 100, including geographical separation over a communication network. Data for generating images on display system 110 can be acquired from a base model data source 120, such as a target image … that provides base model data 122 that is optically or otherwise readable and allows for determining and updating the spatial relationship between the base model data source 120 and any data acquisition device (optical or otherwise) that can move relative to the base model data source 120.”).
CAPOZELLA-CINI do not explicitly disclose forming, from pictures of the rendered model of the [3-D] space scene, a video stream, which ROIMELA discloses (ROIMELA; FIG. 7; ¶ 0097; “In step 705, wherein the … image includes [a] video, the MP module 113 may determine [a] [3-D] motion track for the … video, wherein the … confidence value, the rendering of the … pixel onto the … rendered [3-D] map is based, at least in part, on the … [3-D] motion track. … a media content associated with a POI may be a video clip that is to be used in overlapping onto another media content (e.g., an image) or model of the POI in a rendering application. … a user [has] a video of a certain city center, which he … views as rendered in a 3D map application. … a user device where the application is to rendered, may process the video clip and/or its metadata for determining a [3-D] motion track for the video clip. … a confidence value or the rendering of … pixel(s) onto a rendered [3-D] map may be based on the [3-D] motion track. … as … an application at a user device may interact and move the rendering of the video clip in a virtual presentation in a 3D map application, the confidence value of rendering of pixels of the video clip [is] updated based on the 3D motion track.”), as well as the accessibility of the video stream through a network resource location identity (ROIMELA; FIG. 1: ‘communication network 111’, ‘content database 119a-119n’, ‘content providers 107a-107n’; ¶¶ 0062-0066).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method according to claim 17 of CAPOZELLA-CINI to include the forming, from pictures of the rendered model of the [3-D] space scene, a video stream which is accessible through a network resource location identity of ROIMELA. The motivation for this modification is to provide a modality to explore a 3-D environment in real-time using videographic sequences to display a scene from an arbitrary perspective which may change over time.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO, whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday, 10 AM-4 PM MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M COFINO/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614