DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on July 31, 2025 has been entered.
This action is in response to the amendments filed on June 30, 2025. A summary of this action:
Claims 1-20 have been presented for examination.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mental process and certain methods of organizing human activity without significantly more.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Elordi et al., “VIRTUAL REALITY INTERFACES APPLIED TO WEB-BASED 3D E-COMMERCE,” in view of Behzadan, Amir H., ARVISCOPE: Georeferenced visualization of dynamic construction processes in three-dimensional outdoor augmented reality, Diss. University of Michigan, 2008.
This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Amendments
Regarding priority
Denial of priority claim maintained.
The cited portions, and other portions pointed to, do not provide sufficient written description of the instant particular ordered combination of features recited in these claims.
See MPEP § 2163.02: “The courts have described the essential question to be addressed in a description requirement issue in a variety of ways. An objective standard for determining compliance with the written description requirement is, "does the description clearly allow persons of ordinary skill in the art to recognize that he or she invented what is claimed." In re Gosteli, 872 F.2d 1008, 1012, 10 USPQ2d 1614, 1618 (Fed. Cir. 1989)… The test for sufficiency of support in a parent application is whether the disclosure of the application relied upon "reasonably conveys to the artisan that the inventor had possession at that time of the later claimed subject matter." Ralston Purina Co. v. Far-Mar-Co., Inc., 772 F.2d 1570, 1575, 227 USPQ 177, 179 (Fed. Cir. 1985) (quoting In re Kaslow, 707 F.2d 1366, 1375, 217 USPQ 1089, 1096 (Fed. Cir. 1983))… Lockwood v. Am. Airlines, Inc., 107 F.3d 1565, 1572, 41 USPQ2d 1961, 1966 (Fed. Cir. 1997). Possession may be shown in a variety of ways including description of an actual reduction to practice, or by showing that the invention was "ready for patenting" such as by the disclosure of drawings or structural chemical formulas that show that the invention was complete, or by describing distinguishing identifying characteristics sufficient to show that the inventor was in possession of the claimed invention.”
MPEP § 2163 (II)(A): “To make a prima facie case, it is necessary to identify the claim limitations that are not adequately supported, and explain why the claim is not fully supported by the disclosure. For example, in Hyatt v. Dudas, 492 F.3d 1365, 1371, 83 USPQ2d 1373, 1376-1377 (Fed. Cir. 2007), the examiner made a prima facie case by clearly and specifically explaining why applicant’s specification did not support the particular claimed combination of elements, even though applicant’s specification listed each and every element in the claimed combination. The court found the "examiner was explicit that while each element may be individually described in the specification, the deficiency was lack of adequate description of their combination" and, thus, "[t]he burden was then properly shifted to [inventor] to cite to the examiner where adequate written description could be found or to make an amendment to address the deficiency." Id.; see also Stored Value Solutions, Inc. v. Card Activation Techs., 499 Fed.App’x 5, 13-14 (Fed. Cir. 2012) (non-precedential) (Finding inadequate written support for claims drawn to a method of processing debit purchase transactions requiring three separate authorization codes because "the written description [did] not contain a method that include[d] all three codes" and "[e]ach authorization code is an important claim limitation, and the presence of multiple authorization codes in [the claim] was essential".)”
Furthermore, the provisional specification is silent with respect to some of the features.
MPEP § 2173.05(i): “Ex parte Parks, 30 USPQ2d 1234, 1236 (Bd. Pat. App. & Inter. 1993). "Rather, as with positive limitations, the disclosure must only 'reasonably convey[] to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date.' ... While silence will not generally suffice to support a negative claim limitation, there may be circumstances in which it can be established that a skilled artisan would understand a negative limitation to necessarily be present in a disclosure." Novartis Pharms. Corp. v. Accord Healthcare, Inc., 38 F.4th 1013, 2022 USPQ2d 569 (Fed. Cir. 2022)”
Regarding the 112 Rejection
Maintained, updated as necessitated by amendment. No remarks outside of amendments.
The Examiner further notes that these amendments indicate attempts to correct § 112(b) issues that are rooted in both the original claims and the original disclosure.
Regarding the § 101 Rejection
Maintained, updated below as necessitated by amendment.
With respect to the remarks, at prong 1, only conclusory remarks were furnished, which thus do not address the detailed rationale of the rejection.
At prong 2, the remarks miss the point of the historical evidence.
See MPEP § 2106.04(a)(2)(II)(A): “Bilski v. Kappos, 561 U.S. 593, 611, 95 USPQ2d 1001, 1010 (2010) (claims to the concept of hedging are a "fundamental economic practice long prevalent in our system of commerce and taught in any introductory finance class.")”
To clarify, see the opinion: “‘Hedging is a fundamental economic practice long prevalent in our system of commerce and taught in any introductory finance class.’ 545 F. 3d, at 1013 (Rader, J., dissenting); see, e. g., D. Chorafas, Introduction to Derivative Financial Instruments 75–94 (2008); C. Stickney, R. Weil, K. Schipper, & J. Francis, Financial Accounting: An Introduction to Concepts, Methods, and Uses 581–582 (13th ed. 2010); S. Ross, R. Westerfield, & B. Jordan, Fundamentals of Corporate Finance 743–744 (8th ed. 2008).”
While evidence is not required for the prong one analysis (MPEP § 2106.07(a)(III)), SCOTUS in Bilski cited evidence at step one to support its conclusion that the claims were directed to the abstract idea itself.
Here, the Examiner was pointing out a fundamental legal practice long prevalent since at least the days of Thomas Jefferson, and foundational to real-estate law in the United States of America (e.g. it appears in the state code of most states, such as Virginia, as discussed in the rejection below).
At prong 2, the Examiner respectfully disagrees, because the claims simply take an abstract idea and perform it in a computer environment with a “programming language” not limited to any particular means of how to do it on the computer. ¶ 101: “The structural programming language may include, e.g., a markup language, object-orient programming language, scripting language, among others or any combination thereof.” To be clear, XML is a markup language (that is the “ML”), so see Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), as discussed in MPEP § 2106.05(f): “The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents.”
To clarify, there is no technological problem in this disclosure for the hierarchical coordinate systems, but rather a problem that was solved long ago in the fields of cartography and surveying, i.e. how to have multiple coordinate systems mapped to each other, so that something small in one system (e.g. a townhome in Alexandria, VA, which is invisible on the world map, on which Alexandria itself is but a tiny dot if marked at all) appears at a larger size when a person looks at the property boundary maps, and at an even larger size when one looks at the architectural drawings (e.g. ones submitted for code compliance).
All of these coordinate systems are also readily referenced to each other, e.g. Washington, DC has Congress as the origin (see the numbering of the streets), and boundary markers for the boundary lines are dispersed throughout VA and along the DC/MD border, thus enabling relative positioning on all coordinate systems at the same time, while keeping the actual coordinates and dimensions on each separate grid unmodified.
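For illustration only, the conventional technique described above can be sketched in a few lines: each coordinate frame stores only a fixed reference (origin and scale) to its parent grid, and a point's local coordinates are never modified while the point remains locatable on every enclosing grid. This sketch is the Examiner's own, not drawn from the claims or the disclosure; the names `Frame` and `to_world` and all numeric values are hypothetical.

```python
# Purely illustrative sketch of the conventional technique: a hierarchy
# of coordinate frames, each defined only by an origin and a scale
# within its parent grid. The frames' own coordinates stay unmodified;
# positioning on every enclosing grid is obtained by composing mappings.
# (Names and values are hypothetical, not from the record.)

class Frame:
    def __init__(self, parent=None, origin=(0.0, 0.0), scale=1.0):
        self.parent = parent    # enclosing coordinate system, if any
        self.origin = origin    # this frame's origin, in parent units
        self.scale = scale      # parent units per one local unit

    def to_world(self, x, y):
        """Map a local (x, y) up through every enclosing frame."""
        frame = self
        while frame is not None:
            x = frame.origin[0] + x * frame.scale
            y = frame.origin[1] + y * frame.scale
            frame = frame.parent
        return (x, y)

# e.g. a survey grid, a city plat within it, and a single lot drawing
world = Frame()
city = Frame(parent=world, origin=(1000.0, 2000.0), scale=0.1)
lot = Frame(parent=city, origin=(5.0, 5.0), scale=0.01)

# The lot drawing keeps its own unmodified local coordinates (10, 20),
# while the same point is simultaneously positioned on the city grid
# and the world grid:
print(lot.to_world(10.0, 20.0))   # approximately (1000.51, 2000.52)
```

The lot's own grid is never rescaled or redrawn; only its fixed reference to the parent grid is consulted, which is the same relationship that boundary markers and street numbering provide on paper.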
Thus, the claim is not directed to an improvement of technology, but rather to the abstract idea itself. The claim places no requirements on how a computer would implement the hierarchical coordinate system in a particular manner on the computer itself, e.g. a particular data structure for an improved manner of storing the information of the coordinate system, nor does the specification convey any such improvement to technology.
MPEP § 2106.05(I): “An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016).”
Furthermore, these remarks then point to the evidence of record, but as stated in MPEP § 2106.04(d)(1): “Specifically, the "improvements" analysis in Step 2A determines whether the claim pertains to an improvement to the functioning of a computer or to another technology without reference to what is well-understood, routine, conventional activity.” Then see MPEP § 2106.06(b): “If the claims are a "close call" such that it is unclear whether the claims improve technology or computer functionality, a full eligibility analysis should be performed to determine eligibility. See BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1349, 119 USPQ2d 1236, 1241 (Fed Cir. 2016). Only when the claims clearly improve technology or computer functionality, or otherwise have self-evident eligibility, should the streamlined analysis be used. For example, because the claims in BASCOM described the concept of filtering content, which is a method of organizing human behavior previously found to be abstract, the Federal Circuit considered them to present a "close call" in the first step of the Alice/Mayo test (Step 2A), and thus proceeded to the second step of the Alice/Mayo test (Step 2B) to determine their eligibility. Id. Although the Federal Circuit held these claims eligible at Step 2B (Pathway C) because they presented a "technology-based solution" of filtering content on the Internet that overcame the disadvantages of prior art filtering systems and that amounted to significantly more than the recited abstract idea, it also would be reasonable for an examiner to have found these claims eligible at Pathway A or B if the examiner had considered the technology-based solution to be an improvement to computer functionality.” – the Examiner noting “A” is at step one, prong 2.
And, as per the evidence of record, this is a purely conventional idea routinely and commonly used in computer modeling, readily within the common knowledge of a POSITA construing the claims (clarification below), and used in numerous other fields (e.g. real estate law) since long before the computer, for it is an ineligible abstract idea.
At 2B, the remarks offer merely a general conclusory statement and do not address the rationale of record and its detailed analysis.
Regarding the § 102/103 Rejection
Maintained, updated as necessitated by amendment.
The remarks are addressing the newly added subject matter – so see below for how the combination of references teaches this claimed invention as presently amended.
With respect to the remarks regarding Behzadan, this is a piecemeal attack on Behzadan alone, and does not address what it would fairly suggest for POSITA under § 103 when Elordi, the primary reference, was taken in view of Behzadan, as per the rejection.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, Application No. 63/591,840, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application.
Claims 1-20 are not sufficiently described in the ‘840 application.
To concisely describe some of the below issues: the Feb. 14, 2025 remarks point to (in response to the § 101 rejection) ¶ 40 in the instant disclosure for its description of a “hierarchical coordinate system” and then allege that this is reflected in the present claims by several particular recitations. The ‘840 provisional application has no mention of such a hierarchical coordinate system (e.g. ¶ 132).
To further clarify, by a listing of examples of the issues:
For example, claim 1 recites: “generating, by the at least one processor, at least one digital object in the at least one simulated area corresponding to the at least one external integration, the at least one digital object comprising at least one computer instruction configured to integrate the at least one external software function into the at least one simulated area;” – however, the only mention of a digital object in the ‘840 is in ¶ 126; there is no recitation of an “external integration” let alone used in this particular combination (MPEP § 2163(II)(A) for Hyatt v. Dudas, 2007).
For a second example, claim 1 recites a “mapping, by the at least one processor, at least one area defined within the at least one digital layout to at least one simulated area in the virtual space based at least in part on a virtual space 3D coordinate system so as to create a simulated space comprising at least one simulated layout corresponding to the at least one area of the planned space; wherein the virtual space 3D coordinate system is mapped to at least one position in the at least one virtual environment 3D coordinate system to locate the simulated space in the at least one virtual environment;;” – however, the only mention of a coordinate system in the ‘840 is in ¶ 132, and the mapping step is generically described at a high level in ¶¶ 67, 77, 79, and 100 – i.e. this particular presently claimed combination is not sufficiently described.
A similar issue exists in claim 1 for the “dynamically resize” feature, as this is simply not described in the ‘840 in the particular combination now claimed.
Claim 11 has similar issues. The dependent claims inherit these issues and add their own issues under § 112(a), e.g. the term “cash register” as used in claims 2 and 3 is not recited in the ‘840 application, let alone in the particular manner it is being used in the present claims; similarly for the “chat bot” of claim 3, etc.
This is a non-exhaustive list of features of the presently claimed invention that are not sufficiently described in the ‘840 application.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Dependent claims inherit the deficiencies of the claims they depend upon.
See MPEP § 2163(II)(A): “To make a prima facie case, it is necessary to identify the claim limitations that are not adequately supported, and explain why the claim is not fully supported by the disclosure. For example, in Hyatt v. Dudas, 492 F.3d 1365, 1371, 83 USPQ2d 1373, 1376-1377 (Fed. Cir. 2007), the examiner made a prima facie case by clearly and specifically explaining why applicant’s specification did not support the particular claimed combination of elements, even though applicant’s specification listed each and every element in the claimed combination. The court found the "examiner was explicit that while each element may be individually described in the specification, the deficiency was lack of adequate description of their combination" and, thus, "[t]he burden was then properly shifted to [inventor] to cite to the examiner where adequate written description could be found or to make an amendment to address the deficiency."”
See MPEP §2163.02: “An applicant shows that the inventor was in possession of the claimed invention by describing the claimed invention with all of its limitations using such descriptive means as words, structures, figures, diagrams, and formulas that fully set forth the claimed invention. Lockwood v. Am. Airlines, Inc., 107 F.3d 1565, 1572, 41 USPQ2d 1961, 1966 (Fed. Cir. 1997).”
Representative claim 1 recites:
receiving, by at least one processor, a simulation request comprising identifying a space to be simulated in at least one virtual environment having at least one virtual environment 3D coordinate system;
¶ 5: “receiving, by at least one processor, a simulation request including identifying a place to be simulated in a virtual environment;”
¶ 9: “wherein the plurality of graphical layers are configured to be dynamically rendered based at least in part on at least one computing so as to visually mix at least one portion of the virtual space with a physical view of the place.”
¶ 39: “Rather, the online world can be used to create deeper experiences in conjunction with the real-world through virtual simulations of real and imagined places that can augment real-world behaviors while extending the possibilities of community and experiences through the flexibility and extensibility of virtual spaces.”
¶ 103: “In some embodiments, the core sub-platform 120 may allow users/businesses to build a digital visual version of a real or imagined place (e.g., a place business or corporate HQ, or other physical place and/or structure, real or imagined)”
The amendment is not sufficiently described, for the noun “place” as used in the disclosure is not the same as “space”, and the only portions of the disclosure that recite the act of “identifying” refer to a “place”, not a space. The Examiner suggests, in view of ¶ 83, cancelling the term “identifying”, as ¶ 83 discloses: “For example, a user may create a space within the 3D simulated space to visit and congregate with others such the avatars of the other users (may be visible or non-visible to other users), request spaces to be created to increase user experiences”.
obtaining, by the at least one processor, at least one digital layout representing at least one layout of at least one area of the space; wherein the at least one digital layout comprises: at least one simulated space layout corresponding to a simulated space associated with the space to be simulated in the at least one virtual environment, and at least one simulated area layout associated with at least one simulated area to simulate the at least one area of the space; wherein the at least one digital layout comprises structural programming defining, for the space, a plurality of relational mappings defining a plurality of spatial relationships between each of:
Similar as above, see ¶ 5: “obtaining, by the at least one processor, at least one digital layout representing at least one layout of at least one area of a planned virtual space… mapping, by the at least one processor, at least one area defined within the at least one digital layout to at least one simulated area in the virtual space based at least in part on a virtual space 3D coordinate system so as to create a simulated space including at least one simulated layout corresponding to the at least one area of the planned space”
Again, not sufficiently described. The instant specification is clear that the “digital layout” is not the “simulated layout”, expressly using them as separate elements in the original claims and in the same paragraph of the disclosure. The claims furthermore later make clear that these are distinct elements, i.e. first a virtual space is received, and then simulations are done with that space.
To further clarify, the specification is exceedingly precise at some points (e.g. the original claims), with replete usage of certain modifiers on particular nouns to point out distinct elements. The specification does not convey that these elements are interchangeable, and thus does not provide sufficient written description support for swapping one distinct element for another in the same combination.
mapping, by the at least one processor, the at least one area defined within the at least one digital layout to the at least one simulated area in the virtual space based at least in part on a virtual space 3D coordinate system and the structural programming defining the plurality of relational mappings so as to create the simulated space comprising the at least one simulated space layout corresponding to the space and at least one simulated area layout corresponding to the at least one area of the space;
Similar reasons as above apply, i.e. the simulated layout is a distinct element from the digital layout in the instant disclosure, and the present amendment indicates an attempt to use these interchangeably. To clarify, the present claims now require that the digital layout comprises the simulated layouts, in contrast to the specification, which describes the simulated layout as created by the mapping step along with the simulated space.
Claims 6 and 16 (using claim 6 as representative) recite:
The method of claim 2, further comprising:
determining, by the at least one processor, at least one graphical layer associated with the at least one digital object in the at least on the simulated space;
positioning, on at least one display of the at least one computing device, by the at least one processor, the at least one graphical layer to align the simulated space with the at least one area corresponding to the space based at least in part on the user;
and rendering, by the at least one processor, the at least one graphical layer so as to superimpose the at least one digital object within the simulated space.
See ¶¶ 10 and 20, which recite the original claims, e.g. “positioning, on at least one display of the at least one computing device, by the at least one processor, the at least one graphical layer to align the at least one simulated space with the at least one space based at least in part on the view orientation”.
At issue is that there is insufficient written description to support this particular combination as now claimed, as the instant disclosure does not provide a sufficient independent basis so as to clearly link this claim in the manner now recited; e.g. the claim now recites the “to align the simulated space with the at least one area corresponding to the space” limitation, but ¶¶ 10 and 20 merely describe that this is “the at least one space of the place”, whereas the instant disclosure previously sets forth numerous spaces, e.g. “a planned virtual space”, “a simulated space”, etc. (e.g. ¶¶ 5 and 15). Also, see the originally filed claims, which had similar such deficiencies (as was noted in the original non-final rejection, starting on page 11, wherein the Examiner had previously considered ¶¶ 10 and 20 to resolve the prior § 112(b) issues and noted that the instant disclosure did not remedy the deficiencies in the original claims).
In summary, at issue is a claim that was indefinite because both the original claim and the specification showed there were multiple ways to interpret the claim scope (specifically, what “the space” in ¶ 10 actually refers back to in the disclosure, and the disclosure was and is indefinite), and any amendment to address this issue of indefiniteness is unsupported by the disclosure. Hence the Examiner’s original suggestion, in the original non-final rejection, to cancel this claim (in contrast, had the specification actually provided sufficient support, the Examiner would have suggested a supported amendment, e.g. see the rejection of dependent claim 7 on page 14 of the non-final rejection mailed Nov. 2024).
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The term “and” in claims 1-20 is used by the claims to mean “or,” while the accepted meaning is “and”. The term is indefinite because the specification does not clearly redefine the term.
See MPEP § 2173.05(a)(II): “The requirements for clarity and precision must be balanced with the limitations of the language and the science. If the claims, read in light of the specification, reasonably apprise those skilled in the art both of the utilization and scope of the invention, and if the language is as precise as the subject matter permits, the statute (35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph) demands no more. Packard, 751 F.3d at 1313, 110 USPQ2d at 1789 ("[H]ow much clarity is required necessarily invokes some standard of reasonable precision in the use of language in the context of the circumstances."). This does not mean that the examiner must accept the best effort of applicant. If the language is not considered as precise as the subject matter permits, the examiner should provide reasons to support the conclusion of indefiniteness and is encouraged to suggest alternatives that would not be subject to rejection.”
See ¶ 37: “As used herein, the terms "and" and "or" may be used interchangeably to refer to a set of items in both the conjunctive and disjunctive in order to encompass the full description of combinations and alternatives of the items. By way of example, a set of items may be listed with the disjunctive "or", or with the conjunction "and." In either case, the set is to be interpreted as meaning each of the items singularly as alternatives, as well as any combination of the listed items”
To clarify on this definition, a claim to a method comprising steps A, B, and C has the exact same scope as a claim to a method comprising steps A, B, or C (i.e. a method in which only one step is required).
At issue is that ¶ 37 does not provide sufficiently clear public notice of this special definition, because the terms “and” and “or” are not technological terms of art, but rather ordinary simple English words with well-accepted meanings known to all English speakers. In other words, no reasonable POSITA would read the term “and” to be “or”; rather, they would look for the claims and the disclosure to explicitly recite “or” instead of the term “and”. MPEP § 2111.01(I): “Chef America, Inc. v. Lamb-Weston, Inc., 358 F.3d 1371, 1372, 69 USPQ2d 1857 (Fed. Cir. 2004) (Ordinary, simple English words whose meaning is clear and unquestionable, absent any indication that their use in a particular context changes their meaning, are construed to mean exactly what they say.)”
To further clarify, MPEP § 2173.05(a)(III): “See, e.g., Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999) ("While we have held many times that a patentee can act as his own lexicographer to specifically define terms of a claim contrary to their ordinary meaning," in such a situation the written description must clearly redefine a claim term "so as to put a reasonable competitor or one reasonably skilled in the art on notice that the patentee intended to so redefine that claim term."); Hormone Research Foundation Inc. v. Genentech Inc., 904 F.2d 1558, 15 USPQ2d 1039 (Fed. Cir. 1990).” – to put POSITA on clear notice that “or” is to be used instead of “and” in a claim, POSITA would reasonably expect the claim to recite “or” explicitly. POSITA would expect such clear notice to be explicit in the present claims because these terms are ordinary, simple, English words.
For purposes of examination, the Examiner uses the well-accepted meanings of these ordinary simple English terms, and suggests that the Applicant delete ¶ 37.
To clarify on how “or” is construed, the Examiner interprets it as an inclusive “or”, i.e. “A, B, or C” conveys “A, B, or C, or any combination thereof”. Should it be intended to be exclusive, the Examiner suggests other clarifying language, e.g. “…one of the following: A, B, or C”.
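For purposes of illustration only, the difference in scope between the inclusive and exclusive readings can be enumerated in a short sketch (the step names A, B, and C are placeholders and do not correspond to any limitation of the present claims):

```python
from itertools import combinations

steps = ["A", "B", "C"]

# Inclusive "or" -- "A, B, or C, or any combination thereof":
# any non-empty combination of the listed steps falls within the scope.
inclusive_scope = [set(c) for r in range(1, len(steps) + 1)
                  for c in combinations(steps, r)]

# Exclusive "or" -- "one of the following: A, B, or C":
# exactly one of the listed steps.
exclusive_scope = [{s} for s in steps]

# The inclusive reading covers 7 combinations ({A}, {B}, {C}, {A,B},
# {A,C}, {B,C}, {A,B,C}); the exclusive reading covers only 3.
```

Under the inclusive reading, a method performing both A and B still falls within the scope of “A, B, or C”; under the exclusive reading it does not.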
As a final point, should this rejection be overturned or withdrawn, the Examiner’s claim construction would be forced by the special definition of “and” in the specification to read every “and” in the claims as an “or”, i.e. claim 1 would only require the “receiving…” step, and so on, e.g. claim 4 would be an “or” on its steps, etc.
Therefore, under such a construction, the independent claims would be anticipated under § 102 by any reference that merely taught this first limitation in the independent claims, along with the preamble, regardless of any later limitations in the claim. Any limitation that recited an “and” within the limitation would also be construed as “or”, e.g. determining elements A and B would be construed as “A or B” under the BRI consistent with the special definition.
In other words, the claim construction by the Examiner (reading “and” as solely “and” and giving no weight to the special definition of this word) is merely to avoid having to construe the claim as being akin to a claim for a charcoal briquette (Chef America, Inc. v. Lamb-Weston, Inc., 358 F.3d 1371, 1372, 69 USPQ2d 1857 (Fed. Cir. 2004): “The problem is that if the batter-coated dough is heated to a temperature range of 400° F. to 850° F., as the claim instructs, it would be burned to a crisp. Instead of the “dough products suitable for freezing and finish cooking to a light, flaky, crispy texture,” ′290 patent, col. 2, ll. 11–12, which the patented process is intended to provide, the resultant product of such heating will be something that, in the words of one of the attorneys in this case, resembles a charcoal briquet… This court, however, repeatedly and consistently has recognized that courts may not redraft claims, whether to make them operable or to sustain their validity… Even “a nonsensical result does not require the court to redraft the claims of the [’290] patent… B. Although “a patentee can act as his own lexicographer to specifically define terms of a claim contrary to their ordinary meaning,” id. (citing Digital Biometrics v. Identix, Inc., 149 F.3d 1335, 1344 (Fed.Cir.1998)), we discern nothing in the claims, the specification, or the prosecution history that indicates that the patentees here defined “to” to mean “at.””).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mental process and certain methods of organizing human activity without significantly more.
Step 1
Claim 1 is directed towards the statutory category of a process.
Claim 11 is directed towards the statutory category of an apparatus.
Claim 11, and the dependents thereof, are rejected under a similar rationale as representative claim 1 and the dependents thereof.
Claim interpretation
The claims are given their broadest reasonable interpretation by a person of ordinary skill in the art. See Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) as discussed in MPEP § 2111; also see in MPEP § 2111: “Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow.”) … Further, the broadest reasonable interpretation of the claims must be consistent with the interpretation that those skilled in the art would reach.”
MPEP § 2111.01(I): “Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification - the greatest clarity is obtained when the specification serves as a glossary for the claim terms. Phillips v. AWH Corp., 415 F.3d 1303, 1315, 75 USPQ2d 1321, 1327 (Fed. Cir. 2005) (en banc) ("[T]he specification ‘is always highly relevant to the claim construction analysis. Usually, it is dispositive; it is the single best guide to the meaning of a disputed term.’" (quoting Vitronics Corp. v. Conceptronic Inc., 90 F.3d 1576, 1582 (Fed. Cir. 1996)).”
MPEP § 2111.01(III): “"[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention, i.e., as of the effective filing date of the patent application." Phillips v. AWH Corp.,415 F.3d 1303, 1313, 75 USPQ2d 1321, 1326 (Fed. Cir. 2005) (en banc); Sunrace Roots Enter. Co. v. SRAM Corp., 336 F.3d 1298, 1302, 67 USPQ2d 1438, 1441 (Fed. Cir. 2003); Brookhill-Wilk 1, LLC v. Intuitive Surgical, Inc., 334 F.3d 1294, 1298, 67 USPQ2d 1132, 1136 (Fed. Cir. 2003) ("In the absence of an express intent to impart a novel meaning to the claim terms, the words are presumed to take on the ordinary and customary meanings attributed to them by those of ordinary skill in the art.")… Phillips v. AWH Corp., 415 F.3d 1303, 1317, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) ("Although we have emphasized the importance of intrinsic evidence in claim construction, we have also authorized district courts to rely on extrinsic evidence, which "consists of all evidence external to the patent and prosecution history, including expert and inventor testimony, dictionaries, and learned treatises.")… Any meaning of a claim term taken from the prior art must be consistent with the use of the claim term in the specification and drawings. Moreover, when the specification is clear about the scope and content of a claim term, there is no need to turn to extrinsic evidence for claim interpretation.”
As part of properly determining the broadest reasonable interpretation, one must first determine who is a person of ordinary skill in the art. MPEP § 2141.03(I): “The person of ordinary skill in the art is a hypothetical person who is presumed to have known the relevant art at the relevant time. Factors that may be considered in determining the level of ordinary skill in the art may include: (A) "type of problems encountered in the art;" (B) "prior art solutions to those problems;" (C) "rapidity with which innovations are made;" (D) "sophistication of the technology; and" (E) "educational level of active workers in the field. In a given case, every factor may not be present, and one or more factors may predominate." In re GPAC, 57 F.3d 1573, 1579, 35 USPQ2d 1116, 1121 (Fed. Cir. 1995); Custom Accessories, Inc. v. Jeffrey-Allan Indus., Inc., 807 F.2d 955, 962, 1 USPQ2d 1196, 1201 (Fed. Cir. 1986); Environmental Designs, Ltd. v. Union Oil Co., 713 F.2d 693, 696, 218 USPQ 865, 868 (Fed. Cir. 1983)…. "A person of ordinary skill in the art is also a person of ordinary creativity, not an automaton." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421, 82 USPQ2d 1385, 1397 (2007)… The level of disclosure in the specification of the application under examination or in relevant references may also be informative of the knowledge and skills of a person of ordinary skill in the art. For example, if the specification is entirely silent on how a certain step or function is achieved, that silence may suggest that figuring out how to achieve that step or function is within the ordinary skill in the art, provided that the specification complies with 35 U.S.C. 112. References which are not prior art may be relied upon to demonstrate the level of ordinary skill in the art at or around the relevant time. See In re Merck & Co., Inc., 800 F.2d 1091, 1098, 231 USPQ 375, 380 (Fed. Cir. 1986) ("Evidence of contemporaneous invention is probative of ‘the level of knowledge in the art at the time the invention was made.’"…”
MPEP § 2143.01(II): “If the only facts of record pertaining to the level of skill in the art are found within the prior art of record, the court has held that an invention may be held to have been obvious without a specific finding of a particular level of skill where the prior art itself reflects an appropriate level. Chore-Time Equipment, Inc. v. Cumberland Corp., 713 F.2d 774, 218 USPQ 673 (Fed. Cir. 1983). See also Okajima v. Bourdeau, 261 F.3d 1350, 1355, 59 USPQ2d 1795, 1797 (Fed. Cir. 2001).”
Should further clarification be sought on this level of skill determination during Examination, see (informative) Ex parte Jud, PTAB Appeal No. 2006-1061, available here: https://www.uspto.gov/patents/ptab/precedential-informative-decisions
The Examiner finds the following evidence informative at the level of skill determination for claim interpretation:
Tombers, Paul A. "Teaching Advanced CAD Concepts." International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Vol. 6234. American Society of Mechanical Engineers, 1991. Page 300, col. 2, ¶¶ 2-4: “One of the most fundamental concepts the advanced CAD student must understand is how to use the CAD system's coordinate systems to generate a 100% accurate three dimensional model. Most CAD systems capable of generating three dimensional models use two types of coordinate systems, a global coordinate system and a local coordinate system…. The global coordinate system usually is a fixed, right-handed Cartesian coordinate system. For any given point on the model, the point will have a unique x, y, z coordinate that defines its position relative to the origin of the global coordinate system… A local coordinate system is a coordinate system that the CAD student can specifically define to help generate an entity or plane that would be difficult or impossible to define in the global coordinate system. The origin of the local coordinate system can be located anywhere in the global coordinate system and the axes, x, y, and z, may be turned and tilted in any direction…”
Rusinkiewicz, “Scene Graphs & Modeling Transformations”, COS 426 Lecture Notes, Spring 2011, URL: www(dot)cs(dot)princeton(dot)edu/courses/archive/spr11/cos426/outline(dot)html – see the figure on slide 7 which shows such a hierarchical coordinate system; slide 8: “Allows definitions of objects in own coordinate systems”; see the figure on slide 14 including the parts annotated “Modeling Coordinates” and “World Coordinates”
Marschner, “3D Viewing”, CS 4620 Lecture 11 Slides, Cornell University, 2019, URL: www(dot)cs(dot)cornell(dot)edu/courses/cs4620/2019fa/ - see slide 5 which shows the use of hierarchical coordinate systems and the transformations between them as part of the “Pipeline” in 3D rendering; see the remaining slides to further clarify, including slide 16: “Remember that geometry would originally have been in the object’s local coordinates; transform into world coordinates is called the modeling matrix”
Buss, “3D Computer Graphics – A mathematical Introduction with OpenGL”, May 2019, URL: www(dot)math(dot)ucsd(dot)edu/~sbuss/CourseWeb/Math155A_2019Winter/SecondEdDraft(dot)pdf – see page 32: “…We start with a vertex position x, usually as a point in 3-space, denoting a position for the vertex in local coordinates. For example, in the Ferris wheel example, to model a single chair of the Ferris wheel it might be the most convenient to model it as being centered at the origin. Vertices are specified relative to a “local coordinate system” for the chair. In this way, the chair can be modeled once without having to consider where the chairs will be placed in the final scene…. The Model matrix M transforms vertex positions x from local coordinates into world coordinates (also called global coordinates). The world coordinates can be viewed as the “real” coordinates. In the Ferris wheel example, multiple chairs are rendered at different positions and orientations on the Ferris wheel, since the chairs go around with the wheel plus they may rock back-and-forth. Each chair will have its own Model matrix M. The Model matrix transforms vertex positions x from local coordinates into global coordinates…” – and see the remaining portions of pages 32-34; then see figure II.1 on page 34.
Blender 3D Noob to Pro, Chapter: Coordinate Spaces in Blender, last edited on 2 June 2018, URL: en(dot)wikibooks(dot)org/wiki/Blender_3D:_Noob_to_Pro/Coordinate_Spaces_in_Blender – see the subsection on “Global and Local Coordinates”: “Blender refers to the coordinate system described above as the global coordinate system, though it's not truly global as each scene has its own global coordinate system. Each global coordinate system has a fixed origin and a fixed orientation, but we can view it from different angles by moving a virtual camera through the scene and/or rotating the camera. Global coordinates are adequate for scenes containing a single fixed object and scenes in which each object is merely a single point in the scene. When dealing with objects that move around (or multiple objects with sizes and shapes), it's helpful to define a local coordinate system for each object, i.e. a coordinate system that can move with, and follow the object. The origin of an object's local coordinate system is often called the center of the object although it needn't coincide with the geometrical center of the object. 3D objects in Blender are largely described using vertices (points in the object, singular form: vertex). The global coordinates of a vertex depend on: the (x, y, z) coordinates of the vertex in the object's local coordinate system the location of the object's center any rotation (turning) of the local coordinates system relative to the global coordinate system, and any scaling (magnification or reduction) of the local coordinate system relative to the global coordinate system.”
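The local-to-world mapping described in these references (e.g. the “Model matrix” of Buss, and the global/local coordinate systems of Tombers) can be sketched as a minimal worked example; the function names and numeric values below are illustrative assumptions, not drawn from the application:

```python
import math

def model_matrix(tx, ty, tz, angle_z):
    """4x4 model matrix: rotate about the z axis, then translate.
    Maps points from an object's local coordinates into world coordinates
    (cf. the Model matrix M in the Buss reference)."""
    c, s = math.cos(angle_z), math.sin(angle_z)
    return [
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def local_to_world(M, p):
    """Apply M to a local point p = (x, y, z); return world coordinates."""
    x, y, z = p
    v = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(M[r][k] * v[k] for k in range(4)) for r in range(3))

# A chair modeled about its own origin (cf. the Ferris wheel example),
# placed at world position (10, 5, 0) with no rotation:
M = model_matrix(10.0, 5.0, 0.0, 0.0)
world = local_to_world(M, (1.0, 2.0, 0.0))  # (11.0, 7.0, 0.0)
```

Nesting additional levels of a hierarchy amounts to applying a further matrix of the same form at each level (room to building, building to world).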
Step 2A – Prong 1
The claims recite an abstract idea of a mental process. See MPEP § 2106.04(a)(2).
The mental process recited in claim 1 is:
… wherein the at least one digital layout comprises structural programming defining, for the space, a plurality of relational mappings defining a plurality of spatial relationships between each of: the at least one virtual environment, the simulated space within the at least one virtual environment, and the at least one simulated area of the simulated space;…mapping, by the at least one processor, the at least one area defined within the at least one digital layout to the at least one simulated area in the virtual space based at least in part on a virtual space 3D coordinate system and the structural programming defining the plurality of relational mappings so as to create the simulated space comprising the at least one simulated space layout corresponding to the space and at least one simulated area layout corresponding to the at least one area of the space; wherein the virtual space 3D coordinate system is mapped to at least one position in the at least one virtual environment 3D coordinate system to locate the simulated space in the at least one virtual environment;
This is a mental process, but for the mere instructions to perform it on a computer/in a computer environment.
A person, e.g. an interior designer or an architect, is readily able to mentally observe a collection of information about a physical space (e.g. observing blueprints representing a layout, or various alternative layouts, of the physical space, and observing the physical space to be redesigned/planned with their own eyes), and then mentally observe/evaluate this information, e.g. by mentally visualizing the layout imposed upon a mental visualization of the physical space.
For example, suppose a person wants to move the furniture around in a room – they would readily be able to sketch a variety of placements of the furniture by performing a series of mental judgements/evaluations/opinions, and then mentally observe these sketches and the room itself, and mentally visualize how each placement would look (e.g. visualizing how a couch would look in a different corner of the room) as part of mentally evaluating/judging which placement option is the best.
With respect to the 3D coordinate system, this would readily be part of the mental process, e.g. the person using physical aids, e.g. a tape measure, to measure the dimensions of the couch and other dimensions in the room (e.g. the height of a half-wall separating the room from another room), and then mentally visualizing the couch placed against the half-wall so as to evaluate its placement, including the distance between the top of the couch and the top of the half-wall.
A person would also readily be able to use a computer as a tool to perform this step, e.g. having the sketches of the various placements, and then manipulating a 3D CAD model on the computer to reflect the various placements/other parts of the layout. The claim does not recite with any particularity how this “mapping…” is performed in a technological manner that would require a particular technological implementation of this step, i.e. “where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016);” as discussed in MPEP § 2106.04(a)(2)(III)(A); and “Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017)…Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents. The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group, 830 F.3d at 1356, 119 USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem"))” as discussed in MPEP § 2106.05(f) – and, as expressly recited in the claim, structural programming is “a computer programming language” which is lacking any particular detail.
As to the relational mappings, these are similarly a mental process. See ¶¶ 110-113, i.e. this is simply the abstract idea of using two separate coordinate systems, one global and another local.
Such a simple mental process is a foundational part of cartography, land surveying, property rights, etc.
To clarify, the planet Earth has a coordinate system long used in mental processes, i.e. latitude and longitude, such as to circumnavigate the planet or to sail across the Atlantic Ocean.
Latitude and longitude are not grid lines on a flat sheet of paper, but rather grid lines in 3D spherical space.
Cartographers have long used mapping techniques such as the Mercator projection (long pre-dating computers and the US constitution) to map 3D spherical objects, i.e. the Earth, to a map, hence creating the well-known size distortions in commonplace maps at the poles.
Next, cities also have maps, wherein city maps might be by latitude and longitude, or they might be by a local reference point, e.g. the 1st boundary stone of Washington DC marking its outer borders, such as in the maps of Washington DC that were created when planning out the city, or likewise for New York.
Or they might have both, e.g. street addresses, where streets follow a common set pattern: e.g. in Washington DC, the US Capitol forms the origin of the coordinate system of the streets, with the street names taking numbered values for north-to-south streets, wherein each location in the city is readily identifiable by street address as well as by latitude and longitude.
Property boundary lines are another common example. In dividing a region of land to form parcels, then sub-dividing, etc., the property boundary lines are generally not in latitude and longitude (e.g. GPS did not exist when many property boundaries were created), but rather have their own local coordinate system. Thus they have a common reference point, or reference points, in each area, referred to as a survey marker (or other terms such as benchmark, survey monument, geodetic marker, etc.), wherein the survey markers are placed on latitude and longitude.
Such a hierarchical coordinate system (i.e. a lower-level one for plots of land, and a global coordinate system) references the lower-level system to the higher-level one by the survey marker coordinates. Thus, one can relationally map from one system to another, e.g. a corner of a plot of land may readily be mapped back to latitude and longitude; but a surveyor, at least one working 100 years ago or longer (e.g. George Washington), would have found the location by reference to the marker itself (e.g. using measurement tools to get to a certain distance in a compass direction from the marker, that point being the point in the local coordinate system with the origin at the marker, and distances specified from that origin).
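The surveyor’s process just described (a compass bearing and measured distance from a marker whose latitude and longitude are known) can be sketched numerically. This is a hedged flat-earth approximation valid only for small distances, and the marker coordinates and distances are illustrative assumptions:

```python
import math

# Approximate meters per degree of latitude (small-distance sketch only).
M_PER_DEG_LAT = 111_320.0

def point_from_marker(marker_lat, marker_lon, bearing_deg, distance_m):
    """Map a local survey point, specified as a compass bearing and
    distance from a survey marker, back to approximate lat/long
    (i.e. from the local coordinate system to the global one)."""
    north = distance_m * math.cos(math.radians(bearing_deg))
    east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = north / M_PER_DEG_LAT
    dlon = east / (M_PER_DEG_LAT * math.cos(math.radians(marker_lat)))
    return marker_lat + dlat, marker_lon + dlon

# A parcel corner 100 m due east (bearing 90 degrees) of a marker:
lat, lon = point_from_marker(38.0, -77.0, 90.0, 100.0)
```

The same point can equivalently be kept in local coordinates (distance and bearing from the marker), which is the relational mapping between the two systems.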
To do this for 3D coordinate systems is no less abstract, i.e. just add altitude/elevation. One could readily use a local elevation axis as well, e.g. instead of referencing to sea level, reference to the elevation at the survey marker (e.g. in the times before adequate means of precisely measuring elevation were widely available).
To add more levels is similarly no less abstract, e.g. now the plot boundaries have been laid out on the local system by a surveyor such as George Washington (a noted surveyor, among other things), and the architect arrives (e.g. Thomas Jefferson, a noted architect, among other things), and the architect quickly realizes that it would be impossible (or at least difficult) in the 1800s to design a building on the grid system of the city map due to how coarse that grid is for the property.
So they decide instead to create their own grid system, using the survey markers at the corners/edges of the lot as the reference points/limits of their grid system, and set out another coordinate system so as to create the blueprints of the building. In doing so, they may readily have mentally used multiple grid systems, e.g. for a complex building such as Monticello, they may well have created another grid system at a lower level of the hierarchy, i.e. a finer level of detail, e.g. for a particular room that called for elaborate frescoes, columns, etc., wherein that room received its own coordinate system referenced/relationally mapped to the higher levels by its boundaries.
The resizing feature is merely a result of such mappings, i.e. if one defines the boundaries of a building in latitude and longitude, then a person, e.g. a surveyor, would readily be able to mentally use the hierarchical coordinate systems of the property maps (with the reference markers) to map the dimensions (and thereby resize the building to a second coordinate system) to the local map, similar to how an architect would zoom in further to their own coordinate system in their mental process (e.g. in drawing blueprints).
Or, for another example, see United States mailing addresses, e.g. the USPTO headquarters is located at 600 Dulany St., Alexandria, VA, 22314. The “600 Dulany St.” identifies the location in the lowest-level coordinate system, the street coordinates. “Alexandria, VA” identifies the next two levels, city and state; 22314 identifies a separate coordinate system (higher level than the street address), the ZIP code grid system (i.e. the US is subdivided into a plurality of areas, each of which gets its own ZIP code, and several areas have a two-level ZIP code, as the higher-level ZIP code region, e.g. 22314, is subdivided into smaller regions).
In other words, it is an abstract idea, and the claims merely recite the use of this abstract idea, but performed on a computer. E.g. ¶ 40 discusses how the sizing at each level of the hierarchy may be different; see maps, which use a hierarchical coordinate system for this exact purpose, e.g. a map of United States ZIP codes is not going to show 600 Dulany St., but it will show the regions that each ZIP code covers.
The next level down in the map hierarchy (or, more likely, a few levels down, e.g. a hierarchy of world map -> US map -> state-level map -> city-level map (and, for a larger city, e.g. NYC, potentially more levels)) would scale up/resize, e.g. a city streets map of Alexandria, VA would readily be expected to show 600 Dulany St.
To clarify on the legal and historical usage of hierarchical coordinate systems (the Examiner noting specifically MPEP § 2106.04(a)(2)(II)(A): “Bilski v. Kappos, 561 U.S. 593, 611, 95 USPQ2d 1001, 1010 (2010) (claims to the concept of hedging are a "fundamental economic practice long prevalent in our system of commerce and taught in any introductory finance class.")”), see:
First, the legal: as codified commonly into state law in many states, predating computers as well, see Va. Code §§ 1-603 to 1-605: “Virginia Coordinate Systems”: “The plane coordinates of a point on the earth's surface, to be used in expressing the position or location of such point in the appropriate zone of these systems, shall be expressed in U.S. survey feet and decimals of a foot. One of these distances, to be known as the "x-coordinate," shall give the position in an east-and-west direction; the other, to be known as the "y-coordinate," shall give the position in a north-and-south direction. These coordinates shall be made to depend upon and conform to the coordinate values for the monumented points of the North American Horizontal Geodetic Control Network as published by the National Ocean Service/National Geodetic Survey, or its successors, and whose plane coordinates have been computed on the systems defined in this chapter. Any such station may be used for establishing a survey connection to either Virginia coordinate system. When converting coordinates in the Virginia Coordinate System of 1983 from meters and decimals of a meter to feet and decimals of a foot, the U.S. survey foot conversion factor (one foot equals 1200/3937 meters) shall be used. This requirement does not preclude the continued use of the International foot conversion factor (one foot equals 0.3048 meters) in those counties and cities where this factor was in use prior to July 1, 1992. The plat or plan shall contain a statement of the conversion factor used and the coordinate values of a minimum of two project points in feet. 1946, p. 167; Michie Suppl. 1946, § 2849(3); Code 1950, § 55-290; 1984, c. 726; 1992, c. 1; 2019, c. 712…Tract of land lying in both coordinate zones. 
When any tract of land to be defined by a single description extends from one into the other of the two coordinate zones established in this chapter, the positions of all points on its boundaries may be referred to either of the two zones, with the zone that is used being specifically named in the description…The Virginia Coordinate System of 1927, North Zone, is a Lambert conformal projection of the Clarke spheroid of 1896, having standard parallels at north latitudes 38° 02' and 39° 12', along which parallels the scale shall be exact. The origin of coordinates is at the intersection of the meridian 78° 30' west of Greenwich with the parallel 37° 40' north latitude, such origin being given the coordinates: x = 2,000,000' and y = 0'. The Virginia Coordinate System of 1927, South Zone, is a Lambert conformal projection of the Clarke spheroid of 1896, having standard parallels at north latitudes 36° 46' and 37° 58', along which parallels the scale shall be exact. The origin of coordinates is at the intersection of the meridian 78° 30' west of Greenwich with the parallel 36° 20' north latitude, such origin being given the coordinates: x = 2,000,000' and y = 0'.” 1946, p. 167; Michie Suppl. 1946, § 2849(4); Code 1950, § 55-291; 2019, c. 712.”
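The two conversion factors named in the statute can be checked with simple arithmetic (an illustrative sketch; the 2,000,000-foot value is the false easting recited in the statute):

```python
from fractions import Fraction

# The two foot definitions named in the statute:
US_SURVEY_FOOT = Fraction(1200, 3937)  # meters per US survey foot
INTL_FOOT = Fraction(3048, 10000)      # meters per international foot

def meters_to_feet(meters, meters_per_foot):
    """Convert a distance in meters to feet under a given foot definition."""
    return meters / float(meters_per_foot)

# The two definitions differ by about 2 parts per million...
ppm = (float(US_SURVEY_FOOT) - float(INTL_FOOT)) / float(INTL_FOOT) * 1e6

# ...which, over a state-plane false easting of x = 2,000,000 ft, amounts
# to roughly 4 ft of discrepancy if the wrong factor is used:
x_m = 2_000_000 * float(US_SURVEY_FOOT)  # the easting expressed in meters
discrepancy_ft = meters_to_feet(x_m, INTL_FOOT) - 2_000_000
```

This roughly 2 ppm difference is why the statute requires the plat or plan to state which conversion factor was used.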
As to the history, and Thomas Jefferson’s involvement in creating the early forms of such laws as above, see White, C. Albert. A history of the rectangular survey system. US Department of the Interior, Bureau of Land Management, 1991. URL: www(dot)blm(dot)gov/sites/default/files/histrect(dot)pdf
White, page 11: “In 1784, a committee headed by Jefferson drafted an ordinance which called for prior survey of tracts ten geographical miles square, which were called hundreds; they would be subdivided into lots one mile square. The lines would run due north and south, east and west and settlement would be by hundreds or by lots. This plan did not call for reservations for schools or churches. It is generally believed that Jefferson drafted the original ordinance. This draft was debated at length and was then referred to a committee composed of one man from each State. Jefferson was in Europe and Grayson from Virginia was named to replace him. This new committee made some alterations; they reduced the tract size to a seven-(statute) mile-square township with 49 lots. One lot in each township was reserved for schools, one lot for religious purposes and four lots to Congress for future disposal. One third of any gold, silver, lead, or copper which might be found was also reserved. The townships would be sold whole at auction for a minimum of $1 per acre, minus the reservations of six lots” and see the text of the Land Ordinance act of 1785 starting on page 11
To clarify that such survey systems used hierarchical grids for mapping the metes and bounds of properties, see fig. 8-10 as discussed on page 24: “The lands were to be surveyed into townships and lots in accordance with the Land Ordinance; the costs of the survey, etc., were borne by the company. The east boundary of the purchase was the west boundary of the Seven Ranges; the new surveys were to be an extension of the Seven Ranges surveys. Lot 16 in each township was reserved for schools, Lots 8,11, and 26 were reserved to Congress, and Lot 29 was reserved for the support of religion (see Fig. 8)… The Ohio Company had over 800 subscribers or stockholders. Using a complicated formula derived by the company, each stockholder was entitled to receive a total of 1,173.37 acres of land in 7 different sizes of tracts. The sixth and seventh tracts were for 640 acres and 262 acres, respectively. To meet the odd acreage of 262 acres, a given township would be divided according to the plan shown in Fig. 9. By that division, 22 shareholders would receive their 640-acre and 262-acre allotments in the township…Approximately 37 townships were surveyed in this manner. Other townships were subdivided into 160-acre quarter sections and the remainder into small tracts and town lots…”
To give another clarifying example of the use of hierarchical grids for mapping well before the invention of the computer, see fig. 29, and note that # 11 is the “Symmes Purchase” and # 16 is the “Ohio Company Purchase”, wherein fig. 29 is a higher-level grid in a hierarchy and fig. 9 is “A Typical Township Subdivision in the Ohio Company Purchase”. In addition, note how the renderings in these drawings change at each level in the hierarchy.
Hierarchical coordinate systems are a fundamental element of the US legal and commerce systems (and many others) for determining property boundaries and the like, long practiced mentally by countless people before the invention of the computer, and therefore are not eligible subject matter for a claim to be directed to. The concept is so foundational that it remains codified in law to this day, dating back to the days and efforts of Thomas Jefferson.
Should further clarification be sought, the Examiner also notes that the well-known 3D Cartesian coordinate system (x, y, z, as many are familiar with) long predates the computer.
Bernstein, Mike. "Modularity in creating 3D game environments." (2017). Bachelor’s Thesis, Tampere University of Applied Sciences. § 3.1: “Since the invention of the Cartesian coordinate system in the 17th century, we have been able to represent a point in theoretical space with the use of three coordinates (New World Encyclopedia 2018). Naturally, as the internal language of computers is based on mathematics, it follows that the earliest games and game engines used the same coordinate system to determine where an object was located in virtual space (picture 15). Further, the idea of using these coordinates to place objects with mathematical precision is not new. The ability to precisely arrange assets according to a grid or other standardization is generally considered obvious. Ensuring individual assets fit within standard grid units is a foundation of modularity in games (Perry 2002, 32).” Then see § 3.1.1: “To begin planning for a coordinate-based modular environment, designers must first determine the basic units by which their coordinate plan will be divided. Typically, a number is chosen because it is easily divisible into smaller units, facilitating the creation of assets of various sizes larger and smaller than the default that still fit within the grid… Certain early 3D games used coordinate-based snapping to procedurally generate huge amounts of content…” then § 3.1.2: “Modern 3D modelling software intrinsically operates on a Cartesian coordinate system, easily facilitating the creation of grid-bound assets….”
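To illustrate the conventional grid-unit snapping Bernstein describes, the following is a minimal sketch; the 4-unit grid size and the function names are hypothetical choices for illustration only, not taken from any cited reference:

```python
# Minimal sketch (hypothetical 4-unit grid) of the coordinate-based
# snapping Bernstein describes: an asset's position is rounded to the
# nearest multiple of the designer's chosen base unit so that modular
# assets align on the grid.

GRID_UNIT = 4  # hypothetical base unit chosen by the designer

def snap(coord, unit=GRID_UNIT):
    """Round one coordinate to the nearest grid line."""
    return round(coord / unit) * unit

def snap_point(point, unit=GRID_UNIT):
    """Snap an (x, y, z) position onto the grid."""
    return tuple(snap(c, unit) for c in point)

# e.g. a freely placed asset at (5.2, 7.9, 0.4) lands on (4, 8, 0)
```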
To further clarify, see Modjeska, David K. Hierarchical data visualization in desktop virtual reality. Diss. 2000. University of Toronto. § 2.4.2: “During physical and information exploration, mental or cognitive maps are essential for human storage and use of environmental information… Research by Stevens & Coupe (1973) provided evidence of hierarchical representations of environmental knowledge. Their experiment showed that human directional judgments about geographic locations can be substantially distorted by large, surrounding region… In all cases, people consistently and significantly biased directional judgments towards the containing geographic region. These errors appear to arise from hierarchical mental representations of spatial information. The errors occur because spatial relations are often inferred from other facts. In the experiment, relative directions between cities were inferred from relative directions between States and counties… Further evidence for hierarchical representations of spatial knowledge was provided by Chase (1983). His study of taxi drivers in Pittsburgh reported several distortions in spatial judgements… These results argue against so-called "maps in the head"; rather, they indicate that largescale environmental representations have a hierarchical organization. Expertise seems to consist of an expanded knowledge of neighborhoods, streets, and environmental perceptual cues… In this context, a cognitive map represents a functional analogue of a cartographic map. This view is compatible with the propositional view of mental map representation…”
wherein structural programming comprises a computer programming language that is configured to independently and dynamically resize at least one of the simulated space and the at least one simulated area independently of the at least one virtual environment 3D coordinate system in response to at least one user interaction with the simulated space;… and execute, responsive the at least one user interaction, the structural programming to dynamically adjust at least one of a size, a position, or shape of at least one of the simulated space or the at least one simulated area – a mental process, given the high level of generality recited, but for the mere instructions to use a computer to do this. E.g. see the discussion of hierarchical coordinate systems above, wherein the purpose of them is resizing. E.g. on a world map, the city of Alexandria, VA [an example of an area in a space of the world] is but a dot, but on a state map it will be increased in size as a result of using a more local coordinate system, and further increased in size at the next layer of the city map.
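The scale-dependent rendering in this map analogy can be sketched as follows; the scale factors and the 10-mile city width are hypothetical values chosen purely for illustration:

```python
# Minimal sketch (hypothetical scale factors) of the resizing effect the
# map analogy describes: the same geographic feature occupies more drawing
# space at each more-local level of the hierarchy, purely as a consequence
# of the coordinate system (map scale) chosen at that level.

# hypothetical map scales: miles of ground represented per inch of map
SCALES = {"world map": 500.0, "state map": 25.0, "city map": 1.0}

def rendered_width(ground_miles, miles_per_inch):
    """Width on the drawn map, in inches, of a feature of the given ground width."""
    return ground_miles / miles_per_inch

# a hypothetical 10-mile-wide city rendered at each level of the hierarchy
widths = {level: rendered_width(10.0, s) for level, s in SCALES.items()}
# world map: ~0.02 in (a dot); state map: ~0.4 in; city map: 10 in
```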
With respect to the term “dynamically”, see the discussion of “real-time” in Electric Power Group as noted in MPEP § 2106.04(a)(2)(III)(D).
determining, by the at least one processor, at least one activity associated with the at least one simulated area; - a mental judgement/evaluation, but for the mere instructions to do it on a computer.
determining, by the at least one processor, at least one external software integration providing at least one external software function associated with performing the at least one activity; - a mental judgement/evaluation, but for the mere instructions to do it on a computer and with generic computer components. See ¶ 17 to clarify the BRI of external integration, e.g. “a cash register that integrates at least one payment service into the virtual space” – e.g. a person mentally judging that the activity is to be selling products, and mentally judging to put a cash register into the area so as to enable the selling of products, but for the mere instructions to do it on a computer and with generic computer components.
generating, by the at least one processor, at least one digital object in the at least one simulated area corresponding to the at least one external integration, the at least one digital object comprising at least one computer instruction configured to integrate the at least one external software function into the at least one simulated area; - a mental process, but for the mere instructions to do it on a computer. For example, ¶ 17: “In some aspects, the techniques described herein relate to a method, wherein the at least one digital object corresponding to the at least one external integration includes at least one of: a cash register that integrates at least one payment service into the virtual space” – e.g. the person mentally judged/evaluated that the area is planned to become a store, e.g. a pet store, and so they judge that a cash register is required for the store to be a store, and thus they observe the space and mentally evaluate where to put the cash register, e.g. mentally visualizing that the cash register goes on top of a countertop in the space, thus providing the additional function of allowing sales activity because there is a cash register.
Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the "Mental Process" grouping of abstract ideas. A person would readily be able to perform this process either mentally or with the assistance of pen and paper. See MPEP § 2106.04(a)(2).
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility. In particular, with respect to physical aids, see example # 45, analysis of claim 1 under step 2A prong 1, including: “Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited calculation, the use of such physical aid does not negate the mental nature of this limitation.”; also see example # 49, analysis of claim 1, under step 2A prong 1: “Moreover, the recited mathematical calculation is simple enough that it can be practically performed in the human mind. Even if most humans would use a physical aid, like a pen and paper or a calculator, to make such calculations, the use of a physical aid would not negate the mental nature of this limitation.”
As such, the claims recite a mental process.
Furthermore, as shown above, hierarchical coordinate systems are a codified part of the US legal system, at least at the state level; therefore they are a certain method of organizing human activity, see MPEP § 2106.04(a)(2)(II)(B): “"Commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations”, forming an important basis of at least property law for real estate. “An example of a claim reciting a commercial or legal interaction in the form of a legal obligation is found in Fort Properties, Inc. v. American Master Lease, LLC, 671 F.3d 1317, 101 USPQ2d 1785 (Fed Cir. 2012). The patentee claimed a method of "aggregating real property into a real estate portfolio, dividing the interests in the portfolio into a number of deedshares, and subjecting those shares to a master agreement." 671 F.3d at 1322, 101 USPQ2d at 1788.” MPEP § 2106.04(a)(2)(II)(B).
Step 2A, prong 2
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Claim 11 - A system comprising: at least one processor in communication with at least one non-transitory computer-readable medium having software instructions stored thereon, wherein, upon execution of the software instructions, the at least one processor is configured to perform a method comprising:… by at least one processor
…virtual…digital…simulated… dynamically … software…the at least one digital object comprising at least one computer instruction configured to integrate the at least one external software function into the at least one simulated area;- part of the mere instructions to use a computer, and generic computer components, as a tool to implement the abstract idea
… wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external integration to perform the at least one external software function associated with performing the at least one activity; - this is akin to “Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), the steps in the claims described "the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’" 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents. The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group., 830 F.3d at 1356, 1356, USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem"))” as discussed in MPEP § 2106.05(f)
The recitations of “structural programming” and associated portions, e.g. “executing” it, and “wherein structural programming comprises a computer programming language”, etc., are nothing more than the mere instruction to do it on a computer in the context of computer programming languages, generally linking to the field of use, and an insignificant computer implementation. E.g. ¶ 101 clarifies that it includes being a “markup language” – see the discussion of XML, which is a markup language (the “ML” in its abbreviation, which stands for eXtensible Markup Language), in MPEP §§ 2106.05(f), (g), and (h).
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)
The “receiving…” and “obtaining…” limitations are mere data gathering, along with similar such recitations (e.g. “in response to at least one user interaction” and “receive the at least one user interaction”).
wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external integration to perform the at least one external software function associated with performing the at least one activity;- part of an insignificant extra-solution activity of mere data outputting, as well as being nominally/tangentially linked to the primary process of the claimed invention, as well as being recited in such a high degree of generality that this is no more than part of an insignificant computer implementation
and publishing, by the at least one processor, the simulated space for access via at least one computing device – mere data transmitting
Should it be found that the “3D” recitations in the presently claimed invention are not part of the abstract idea, then the Examiner notes these would be rejected as generally linking to a particular technological environment (3D instead of 2D); and part of the mere instructions to do it on a computer (as the claim places no restrictions on how these steps are to be performed in a particular technological manner, but are rather purely results-oriented limitations as discussed in MPEP § 2106.05(f)).
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. See MPEP § 2106.04(d).
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
Step 2B
The claimed invention does not recite any additional elements/limitations that amount to significantly more.
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Claim 11 - A system comprising: at least one processor in communication with at least one non-transitory computer-readable medium having software instructions stored thereon, wherein, upon execution of the software instructions, the at least one processor is configured to perform a method comprising:… by at least one processor
…virtual…digital…simulated… dynamically … software…the at least one digital object comprising at least one computer instruction configured to integrate the at least one external software function into the at least one simulated area;- part of the mere instructions to use a computer, and generic computer components, as a tool to implement the abstract idea
… wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external integration to perform the at least one external software function associated with performing the at least one activity; - this is akin to “Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), the steps in the claims described "the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’" 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents. The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group., 830 F.3d at 1356, 1356, USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem"))” as discussed in MPEP § 2106.05(f)
The recitations of “structural programming” and associated portions, e.g. “executing” it, and “wherein structural programming comprises a computer programming language”, etc., are nothing more than the mere instruction to do it on a computer in the context of computer programming languages, generally linking to the field of use, and an insignificant computer implementation. E.g. ¶ 101 clarifies that it includes being a “markup language” – see the discussion of XML, which is a markup language (the “ML” in its abbreviation, which stands for eXtensible Markup Language), in MPEP §§ 2106.05(f), (g), and (h), wherein this is “well-known” (MPEP § 2106.05(g), cited respectively: “Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1937 (Fed. Cir. 2017) (the use of a well-known XML tag to form an index was deemed token extra-solution activity)”). ¶ 101 provides additional evidence with its generic listing of broad classes of programming languages.
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)
The “receiving…” and “obtaining…” limitations are mere data gathering, along with similar such recitations (e.g. “in response to at least one user interaction” and “receive the at least one user interaction”).
wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external integration to perform the at least one external software function associated with performing the at least one activity;- part of an insignificant extra-solution activity of mere data outputting, as well as being nominally/tangentially linked to the primary process of the claimed invention, as well as being recited in such a high degree of generality that this is no more than part of an insignificant computer implementation
and publishing, by the at least one processor, the simulated space for access via at least one computing device – mere data transmitting
Should it be found that the “3D” recitations in the presently claimed invention are not part of the abstract idea, then the Examiner notes these would be rejected as generally linking to a particular technological environment (3D instead of 2D); and part of the mere instructions to do it on a computer (as the claim places no restrictions on how these steps are to be performed in a particular technological manner, but are rather purely results-oriented limitations as discussed in MPEP § 2106.05(f)), and well-understood, routine, and conventional (WURC) in view of the below cited evidence, as well as the evidence discussed above at prong 1 (specifically, Bernstein and Modjeska as cited above at prong 1, and Tombers, Rusinkiewicz, Marschner, Buss, and Blender 3D: Noob to Pro as cited below).
In addition, the above insignificant extra-solution activities are also considered as well-understood, routine, and conventional activities, as discussed in MPEP § 2106.05(d):
The “receiving…” and “obtaining…” limitations are mere data gathering, along with similar such recitations (e.g. “in response to at least one user interaction” and “receive the at least one user interaction”) and are WURC in view of MPEP § 2106.05(d)(II): “iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log); iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;… i. Recording a customer’s order, Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1244, 120 USPQ2d 1844, 1856 (Fed. Cir. 2016);”
and publishing, by the at least one processor, the simulated space for access via at least one computing device - this is considered similar to the example WURC activity as discussed in MPEP § 2106.05(d)(II) of: “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);”
…virtual…digital…simulated… dynamically … software…the at least one digital object comprising at least one computer instruction configured to integrate the at least one external software function into the at least one simulated area … wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external integration to perform the at least one external software function associated with performing the at least one activity; - this is WURC in view of MPEP § 2106.05(d)(II): “vi. A Web browser’s back and forward button functionality, Internet Patent Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015)… iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93;…” – also see:
Dragan, Dinu, et al. "State of the Art in Virtual Reality Shops." Proceedings of the 8th International Conference on Mass Customization and Personalization in Central Europe MCP-CE. 2018. Abstract, then see §§ 2-3, including page 77, col. 1, ¶¶ 2-3 and col. 2, ¶¶ 2-3, and page 78, col. 1, ¶¶ 2-3 and col. 2, ¶¶ 1-2
Elordi, Unai, et al. "Virtual reality interfaces applied to web-based 3D E-commerce." Engineering Systems Design and Analysis. Vol. 44847. American Society of Mechanical Engineers, 2012. Page 3, col. 1, ¶¶ 1-3
Galketiya, M. K., et al. "Virtual Furniture Shop." 2018 13th International Conference on Computer Science & Education (ICCSE). IEEE, 2018. Page 452, the bullet points and last two paragraphs
Speicher, Marco, Sebastian Cucerca, and Antonio Krüger. "VRShop: a mobile interactive virtual reality shopping environment combining the benefits of on-and offline shopping." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1.3 (2017): 1-31. § 1 including the paragraph split between pages 2-3, also see §§ 2.1 and 2.3
Walczak, Krzysztof, Jacek Sokolowski, and Jakub Dziekoński. "Configurable virtual reality store with contextual interaction interface." 2018 11th International conference on human system interaction (HSI). IEEE, 2018. § 1 including page 28 last paragraph, and § II.B on page 30
Macinnes et al., US 2005/0081161 Al, ¶¶ 4-18, including ¶¶ 14-15
Furthermore, the use of hierarchical coordinate systems in 3D modeling, 3D rendering, and the like is a conventionally used abstract idea for these purposes, i.e. there is no inventive concept to be found in the way it is used in these claims.
E.g. Tombers, Paul A. "Teaching Advanced CAD Concepts." International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Vol. 6234. American Society of Mechanical Engineers, 1991. Page 300, col. 2, ¶¶ 2-4: “One of the most fundamental concepts the advanced CAD student must understand is how to use the CAD system's coordinate systems to generate a 100% accurate three dimensional model. Most CAD systems capable of generating three dimensional models use two types of coordinate systems, a global coordinate system and a local coordinate system…. The global coordinate system usually is a fixed, right-handed Cartesian coordinate system. For any given point on the model, the point will have a unique x, y, z coordinate that defines its position relative to the origin of the global coordinate system… A local coordinate system is a coordinate system that the CAD student can specifically define to help generate an entity or plane that would be difficult or impossible to define in the global coordinate system. The origin of the local coordinate system can be located anywhere in the global coordinate system and the axes, x, y, and z, may be turned and tilted in any direction…”
E.g. Rusinkiewicz, “Scene Graphs &Modeling Transformations”, COS 426 Lecture Notes, Spring 2011, URL: www(dot)cs(dot)princeton(dot)edu/courses/archive/spr11/cos426/outline(dot)html – see the figure on slide 7 which shows such a hierarchical coordinate system; slide 8: “Allows definitions of objects in own coordinate systems”; see the figure on slide 14 including the parts annotated “Modeling Coordinates” and “World Coordinates”
E.g. Marschner, “3D Viewing”, CS 4620 Lecture 11 Slides, Cornell University, 2019, URL: www(dot)cs(dot)cornell(dot)edu/courses/cs4620/2019fa/ – see slide 5 which shows the use of hierarchical coordinate systems and the transformations between them as part of the “Pipeline” in 3D rendering; see the remaining slides to further clarify, including slide 16: “Remember that geometry would originally have been in the object’s local coordinates; transform into world coordinates is called the modeling matrix”
E.g. Buss, “3D Computer Graphics – A Mathematical Introduction with OpenGL”, May 2019, URL: www(dot)math(dot)ucsd(dot)edu/~sbuss/CourseWeb/Math155A_2019Winter/SecondEdDraft(dot)pdf – see page 32: “…We start with a vertex position x, usually as a point in 3-space, denoting a position for the vertex in local coordinates. For example, in the Ferris wheel example, to model a single chair of the Ferris wheel it might be the most convenient to model it as being centered at the origin. Vertices are specified relative to a “local coordinate system” for the chair. In this way, the chair can be modeled once without having to consider where the chairs will be placed in the final scene…. The Model matrix M transforms vertex positions x from local coordinates into world coordinates (also called global coordinates). The world coordinates can be viewed as the “real” coordinates. In the Ferris wheel example, multiple chairs are rendered at different positions and orientations on the Ferris wheel, since the chairs go around with the wheel plus they may rock back-and-forth. Each chair will have its own Model matrix M. The Model matrix transforms vertex positions x from local coordinates into global coordinates…” – and see the remaining portions of pages 32-34; then see figure II.1 on page 34.
E.g. Blender 3D Noob to Pro, Chapter: Coordinate Spaces in Blender, last edited on 2 June 2018, URL: en(dot)wikibooks(dot)org/wiki/Blender_3D:_Noob_to_Pro/Coordinate_Spaces_in_Blender – see the subsection on “Global and Local Coordinates”: “Blender refers to the coordinate system described above as the global coordinate system, though it's not truly global as each scene has its own global coordinate system. Each global coordinate system has a fixed origin and a fixed orientation, but we can view it from different angles by moving a virtual camera through the scene and/or rotating the camera. Global coordinates are adequate for scenes containing a single fixed object and scenes in which each object is merely a single point in the scene. When dealing with objects that move around (or multiple objects with sizes and shapes), it's helpful to define a local coordinate system for each object, i.e. a coordinate system that can move with, and follow the object. The origin of an object's local coordinate system is often called the center of the object although it needn't coincide with the geometrical center of the object. 3D objects in Blender are largely described using vertices (points in the object, singular form: vertex). The global coordinates of a vertex depend on: the (x, y, z) coordinates of the vertex in the object's local coordinate system the location of the object's center any rotation (turning) of the local coordinates system relative to the global coordinate system, and any scaling (magnification or reduction) of the local coordinate system relative to the global coordinate system.”
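The local-to-world (model) transform described across the Tombers, Rusinkiewicz, Marschner, Buss, and Blender references can be illustrated with a minimal sketch; all vertex values and transform parameters below are hypothetical, and only uniform scale plus translation are shown (rotation is omitted for brevity):

```python
# Minimal sketch (hypothetical values) of the conventional local-to-world
# transform: an object's vertices are modeled once in its own local
# coordinate system, and a per-instance model transform maps them into
# the shared world/global coordinate system.

def to_world(local_vertex, scale, center):
    """Apply a model transform: uniform scale about the local origin,
    then translation by the object's center (its world-space position)."""
    return tuple(scale * v + c for v, c in zip(local_vertex, center))

# A "chair" modeled once around its own origin (local coordinates),
# analogous to the Ferris wheel chair in the Buss reference:
chair = [(-1.0, 0.0, -1.0), (1.0, 0.0, -1.0), (0.0, 2.0, 0.0)]

# Two instances placed at different world positions via distinct model
# transforms, without re-modeling the geometry:
instance_a = [to_world(v, scale=1.0, center=(10.0, 0.0, 5.0)) for v in chair]
instance_b = [to_world(v, scale=2.0, center=(-4.0, 0.0, 8.0)) for v in chair]
```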
As such, the claims are directed towards a mental process without significantly more.
Regarding the dependent claims
Claim 2 recites another step in the mental process, but for the mere instructions to do it on a computer. In addition, should it be found that “wherein the plurality of digital objects comprises at least one of: furniture, decorations, fixtures, furnishings, a clothing rack, store shelving, a cash register, or a vending machine” is not part of the abstract idea, then this would be considered as generally linking to a particular field of use.
Claim 3 further limits the mental process with “a cash register that integrates at least one payment service into the virtual space”, but for the mere instructions to do it on a computer, as well as certain methods of organizing human activity (of both “Fundamental Economic Practices or Principles” and “Commercial or Legal Interactions” as discussed in MPEP § 2106.04(a)(2)(II)), wherein should it be found that this is not part of the abstract idea then this would be generally linking to a particular field of use/technological environment. The recitation of “at least one data analytics model that integrates at least one data analytics service for user behavior tracking into the virtual space” is considered as certain methods of organizing human activity in view of the July 2024 Fed. Register Notice, in particular see the discussion of “Weisner v. Google LLC, 51 F.4th 1073, 1082 (Fed. Cir. 2022)”, as well as a mental process of mental observations of user behavior and mental evaluation of those observations, but for the mere instructions to do it on a computer – e.g. a business owner observing people as they shop for pastries in a bakery, e.g. during the month of November, and evaluating whether the organization of the pastries is effective for the person shopping (e.g. mentally evaluating if the pumpkin pie should be moved to the center of the display, as many people partake in pumpkin pie in the month of November as visually observed), but for the mere instructions to do this on a computer. Should it be found that the data analytics model feature is not part of the abstract idea, then this would be generally linking to a particular technological environment of computers, given the generality recited in this limitation, and that it is merely stating what the digital object may comprise.
The recitation of “a mobile chat bot…” feature is considered as a mental process, but for the mere instructions to do it on a computer, given the high level of generality recited in the present claims, and in view of the July 2024 Fed. Register Notice for its discussion of “Trinity Info Media, LLC v. Covalent, Inc., 72 F.4th 1355, 1362 (Fed. Cir. 2023).” In addition, should it be found that this mobile chat bot feature is not part of the abstract idea, then this would be generally linking to the technological environment of computers, given the high level of generality recited in this limitation.
Also, the use of chat bots is WURC in view of Trappey, Amy JC, et al. "VR-enabled engineering consultation chatbot for integrated and intelligent manufacturing services." Journal of Industrial Information Integration 26 (2022): 100331. Page 2, col. 1, last two paragraphs: “For better customer service and experience, chatbots have been created and applied to many commercial web applications [16]. Corea [17] proposed the concept of a robot-insurance company, explaining how AI can combine insurance agents, customer services, health assistants, and provide medical advice to create fully automated medical insurance solutions. AI solutions can be used as a driver assistant to help reduce accidents. AI is changing the business models of financial service providers to innovate and seek alternative solutions to old problems, including personalized financial services using chatbots that act as virtual financial advisors [18]. Chatbots are widely applied in e-commerce, the service industries, restaurants, and airline industries yet few manufacturing companies have adopted chatbot applications for collaborative design and testing.”
Also, the use of data analytics models of user behavior tracking is WURC in view of Hernandez, Sergio, et al. "Analysis of users’ behavior in structured e-commerce websites." IEEE Access 5 (2017): 11941-11958. Abstract: “Online shopping is becoming more and more common in our daily lives. Understanding users' interests and behavior is essential to adapt e-commerce websites to customers' requirements. The information about users' behavior is stored in the Web server logs. The analysis of such information has focused on applying data mining techniques, where a rather static characterization is used to model users' behavior, and the sequence of the actions performed by them is not usually considered” – and see §§ I-II
The use of cash registers for providing payment services is considered WURC in view of Boucouvalas, Anthony C., Constantine J. Aivalis, and Kleanthis Gatziolis. "Integrating retail and e-commerce using Web Analytics and intelligent sensors." E-Business and Telecommunications: 12th International Joint Conference, ICETE 2015, Colmar, France, July 20–22, 2015, Revised Selected Papers 12. Springer International Publishing, 2016. § 1 ¶ 2: “Physical stores use multiple techniques in order to analyze client behavior. The majority of these techniques, are based on capturing the data at the cash register, during the checkout process, when the customer is about to”
Claim 4 recites additional steps in the mental process of mental evaluations/judgments, but for the mere instructions to do it on a computer, wherein the functional limitation of “so as to cause to appear that the at least one digital object provides the at least one external software function” is considered as part of the mere instructions to do this on a computer, given the high level of generality recited here, as well as generally linking to the technological environment of computers, as well as mere data displaying as an insignificant extra-solution activity, and WURC in view of MPEP § 2106.05(d)(II): “iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93;” as well as Kužnar, Damjan, et al. "Virtual assistant platform." Informatica 40.3 (2016). §§ 1-2.
Claim 5 recites another step of the mental process, but for the mere instructions to do it on a computer, given the high level of generality recited here. Neither the claims nor the disclosure even describes what the graphical layers are (¶¶ 9-10, 19-20). To clarify on how this is a mental process, but for the mere instructions to do it on a computer: a person would readily be able to observe a layout and a coordinate system for a place, e.g. a coffee shop, e.g. by observing a 2D drawing of a layout on paper (or the display of a computer), and by observing the physical coffee shop and mentally visualizing it in 3D, e.g. by a simple coordinate system of height/width/length (to simplify, let’s say the shop is empty, and a cube). The person has mentally judged to add a cash register, or other furniture, into the coffee shop, and they have a photograph of the coffee shop (an example of a physical view). So the person makes a series of sketches (an example of graphical layers), each sketch representing a piece of furniture to go into the coffee shop (on translucent paper), then overlays the sketches on the image of the coffee shop, thus mixing the layers with the physical view.
Also, see the note below for claim 6.
Claim 6 recites another mental process step of determining a graphical layer, e.g. a person making an observation and sketching, using pen and translucent paper, a graphical layer of a sketch representing what they observed, but for the mere instructions to do this on a computer. The positioning step is similarly a mental process, given the high degree of generality recited here, and the rendering step is mere data displaying as an insignificant extra-solution activity recited at a high level of generality that is WURC in view of MPEP § 2106.05(d)(II), as well as mere instructions to apply the abstract idea on a computer (see MPEP § 2106.05(f)(1): “Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), the steps in the claims described "the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’" 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents. The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 
850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group, 830 F.3d at 1356, 119 USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem")).”)
WURC as well:
Dou, Hao, and Jiro Tanaka. "A mixed-reality shop system using spatial recognition to provide responsive store layout." International Conference on Human-Computer Interaction. Cham: Springer International Publishing, 2020. See figures 1-3, and § 1 ¶¶ 1-3
Jayananda, P. K. V., et al. "Augmented reality based smart supermarket system with indoor navigation using beacon technology (easy shopping android mobile app)." 2018 IEEE International Conference on Information and Automation for Sustainability (ICIAfS). IEEE, 2018. § 1, then see § II including subsections B-C (in particular, the discussion of “Unity” which is a WURC game engine used in AR and VR applications; for relevance see ¶ 52 of the instant disclosure), then see figures 7-9
Sharma, Ojaswa, et al. "Navigation in AR based on digital replicas." The Visual Computer 34 (2018): 925-936. See §§ 1-2, and figures 8-9
Agarwal et al., US 11,989,834. Cols. 2 and 19-20, and fig. 7.
Furthermore, given that this is layering graphical objects, in view of MPEP § 2106.07: “When evaluating a claimed invention for compliance with the substantive law on eligibility, examiners should review the record as a whole (e.g., the specification, claims, the prosecution history, and any relevant case law precedent or prior art) before reaching a conclusion with regard to whether the claimed invention sets forth patent eligible subject matter.” – see Int’l Bus. Machs. Corp. v. Zillow Grp., Inc., 50 F.4th 1371, 1381–82 (Fed. Cir. 2022) (emphasis in bold by Examiner): “The method it recites (selecting, organizing, and displaying visual information based on non-spatial attributes) has long been done by cartographers creating paper maps. See J.A. 2230 (a 1943 U.S. government cartography guide noting that "[v]ariations of color and the use of layer coloring and shading are now playing an important part for representing different features on the map"). The specification does not discuss improvements to specific types of device screens, only generic displays and computer systems… The problem the '389 patent purportedly solves—i.e., that the display of "any system that has large numbers of objects in many categories with relationships is difficult to understand," '389 patent at 2:22-24—is not specific to a computing environment. One could encounter this same occlusion problem when looking at a particularly cluttered paper map. And the patent's solution—to visually present information in a way that "aid[s] the user in distinguishing between the various displayed layers," id. at 2:25-28—could be accomplished using colored pencils and translucent paper, as the district court noted, Decision, 549 F. Supp. 3d at 1263. 
While moving objects between layers or [*1382] rearranging layers may take longer using the manual method, a patent that "automate[s] 'pen and paper methodologies' to conserve human resources and minimize errors" is a "quintessential 'do it on a computer' patent" directed to an abstract[*1383] idea. Univ. of Fla. Rsch. Found., Inc. v. Gen. Elec. Co., 916 F.3d 1363 , 1367 (Fed. Cir. 2019).”
Claim 7 recites a mental process (e.g. observing a person’s location mentally, then using a map of a building and drawing an “X” on the map for where the person was observed to be), but for the mere instructions to do it on a computer, as well as certain methods of organizing human activity (see the July 2024 Fed. Register Notice for its discussion of “Weisner v. Google LLC, 51 F.4th 1073, 1082 (Fed. Cir. 2022)”) but for the mere instructions to do it on a computer.
Claim 8 is considered as certain methods of organizing human activity in view of July 2024 Fed. Register Notice for its discussion of “Weisner v. Google LLC, 51 F.4th 1073, 1082 (Fed. Cir. 2022)” as well as “Elec. Commc'n Techs., LLC v. ShoppersChoice.com, LLC, 958 F.3d 1178, 1181 (Fed. Cir. 2020)” with mere instructions to do it using generic computer components as a tool to implement the abstract idea.
Claim 9 recites another step in the mental process, akin to the one recited in claim 1, and is rejected under a similar rationale.
Claim 10 is generally linking to a particular field of use.
Claims 12-20 are rejected under a similar rationale as claims 2-10.
As such, the claims are directed towards both a mental process and certain methods of organizing human activity without significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Elordi et al., “VIRTUAL REALITY INTERFACES APPLIED TO WEB-BASED 3D E-COMMERCE” in view of Behzadan, Amir H. ARVISCOPE: Georeferenced visualization of dynamic construction processes in three-dimensional outdoor augmented reality. Diss. University of Michigan, 2008.
Regarding Claim 1
Elordi teaches:
A method comprising: (Elordi, abstract, including: “…In this paper we present innovative interfaces that join the concepts of real shops and on-line e-commerce in a Web-based virtual reality platform…”)
receiving, by at least one processor, a simulation request comprising identifying a space to be simulated in at least one virtual environment having at least one virtual environment 3D coordinate system; obtaining, by the at least one processor, at least one digital layout representing at least one layout of at least one area of the space; wherein the at least one digital layout comprises: at least one simulated space layout corresponding to a simulated space associated with the space to be simulated in the at least one virtual environment, and at least one simulated area layout associated with at least one simulated area to simulate the at least one area of the space; wherein the at least one digital layout comprises structural programming defining, for the space, a plurality of relational mappings defining a plurality of spatial relationships between each of: the at least one virtual environment, the simulated space within the at least one virtual environment, and the at least one simulated area of the simulated space; (Elordi, section “Methodology”, including in the subsection “Shop Creation”: “Shop creation is the process of designing a new shop’s shape and appearance and placing virtual objects that will represent products for sale. This is the part that managers will use to create new shops and also to update their contents and distribution…We designed two ways to create a new shop. On the one hand there are parametric shop templates based on simple shapes like rectangles, L-shapes or free-form polygons. For any of those contour shapes, the designer can specify materials for ground, ceiling and walls and a wall height, and the system will create a shop geometry by extruding the contour vertically…To choose the desired creation method, when starting a new shop [i.e. 
the system has received a simulation request for a “new shop”, which is an example of an identified place to be simulated], the designer is presented with the following four options: Rectangular : This option creates a rectangular layout based on 3 editable parameters: width, depth and height. L-shaped : This creates a layout with the shape of an ‘L’ with selectable full length, full depth, short length, short height and height. Free-form layout : In this case a 2D grid is presented representing a top view of the space. The user can draw a polygon for the shop’s layout by clicking points on it with the mouse…”
to clarify on this having a 3D coordinate system, see subsection: “Administrator profile”: “Navigation in this profile is a fly over mode with no collision detection. The designer should be able to reach all parts of the scene easily and does not need a realistic walking experience. The user can use the keyboard to move around and the mouse to rotate the view. Mouse-based look can apply different mappings of mouse movements and button presses into rotations around the axes. We selected one based on mouse press and drag which maps horizontal mouse displacement to horizontal rotation speed and vertical mouse displacement to vertical rotation angle. Mixing of speed and position control can seem confusing but it is quite common and relatively natural. Object manipulation is the most demanding aspect for the manager with respect to interaction. Most 3D modelling applications have quite standard interactive manipulation controls to move, rotate and scale objects in all three axes freely…An object added to the scene once selected can be moved to a new location and rotated around its vertical axis.” – i.e. there is a virtual 3D coordinate system with “three axes” – to clarify, also see the section: “User profile” including: “The user can only walk around on the ground and look 360° around the vertical axis and 180° around a horizontal axis to look up and down to simulate real human motion”
to clarify on layouts of areas and spaces, the simulated space is a “shop” with areas within the shop, e.g. the area shown in fig. 2
as to the structural programming, the claim later recites that this is “a computer programming language”, and see ¶ 101 for what this is, i.e. instructions to do it on a computer, e.g. Elordi, page 2: “The technology that all the interface is based on, is standard Web technology, not depending on any proprietary technology or add-ons: HTML5, CSS, JavaScript and WebGL” and page 3, col. 2, ¶ 3: “The architecture of our platform is distributed and relies heavily on dynamic content through HTML DOM manipulation and Ajax requests [11].”, etc., the Examiner noting that the ”ML” in HTML stands for mark-up language (¶ 101 of the instant disclosure)
as to the relational mappings, see Elordi as discussed above, e.g. “An object added to the scene once selected can be moved to a new location and rotated around its vertical axis”, i.e. there are mappings in the coordinate space that specify the locations of objects in the scene as related to the scene (the shop) itself by mapping onto the coordinate system of the shop, e.g. such that an object can be moved to a new location and rotated around its vertical axis
to clarify on virtual vs. simulated, the Examiner notes that Elordi is creating “A virtual shop” (e.g. page 3, col. 2, ¶ 4: “A virtual shop is basically a structure that contains (i) one definition of the shape of the environment, (ii) a set of 3D static objects for decoration and (iii) a set of objects representative of products for sale. Decoration objects and products include positioning data that will be interactively editable in the design interface, and products have additional properties (reference ID, description, price, product class-dependent properties).”), and then simulates the interactions of users/administrators (section on page 5 titled “Interaction and navigation”), e.g. “For example, the shopping cart contents with partial and total price is displayed in a table. And adding objects to the scene is made by dragging thumbnails from a list on the panel and dropping them in the 3D world”, page 6, col. 1, ¶ 2: “The user profile is applied to all visitors of the shops and is targeted at navigating the shop as in real life, browsing, selecting products and buying…. Navigation here is a walk-type mode in first person view. The user can only walk around on the ground and look 360° around the vertical axis and 180° around a horizontal axis to look up and down to simulate real human motion… Customers must not be able to go trough walls, products or decoration so a collision detection mechanism is needed…”)
mapping, by the at least one processor, the at least one area defined within the at least one digital layout to the at least one simulated area in the virtual space based at least in part on [the virtual environment 3D coordinate system] and the structural programming defining the plurality of relational mappings so as to create the simulated space comprising the at least one simulated space layout corresponding to the space and at least one simulated area layout corresponding to the at least one area of the space; (Elordi, as cited above, also see section “Methodology”, including in the subsection “Shop Creation” as discussed above, including: “We designed two ways to create a new shop. On the one hand there are parametric shop templates based on simple shapes like rectangles, L-shapes or free-form polygons. For any of those contour shapes, the designer can specify materials for ground, ceiling and walls and a wall height, and the system will create a shop geometry by extruding the contour vertically [example of mapping out the shop area/geometry].” – to clarify, see the subsection “Shop, product and decoration model management”: “The process so far creates an empty space for a shop. Now the administrator can add furniture and other decorative objects to the scene, and finally place the products that will be available for purchase.” – to further clarify, see fig. 2: “A snapshot of the administrator interface, showing the list of products which the current manager can add to the scene by a mouse drag & drop action” and subsection: “Interaction and navigation”: “Both administrators and regular users are presented with the virtual 3D scene of the shop…”
to clarify on the 3D coordinate system, see above, also see subsection: “Administrator profile”: “Navigation in this profile is a fly over mode with no collision detection. The designer should be able to reach all parts of the scene easily and does not need a realistic walking experience. The user can use the keyboard to move around and the mouse to rotate the view. Mouse-based look can apply different mappings of mouse movements and button presses into rotations around the axes. We selected one based on mouse press and drag which maps horizontal mouse displacement to horizontal rotation speed and vertical mouse displacement to vertical rotation angle. Mixing of speed and position control can seem confusing but it is quite common and relatively natural. Object manipulation is the most demanding aspect for the manager with respect to interaction. Most 3D modelling applications have quite standard interactive manipulation controls to move, rotate and scale objects in all three axes freely…An object added to the scene once selected can be moved to a new location and rotated around its vertical axis.” – i.e. there is a virtual 3D coordinate system with “three axes” – to clarify, also see the section: “User profile” including: “The user can only walk around on the ground and look 360° around the vertical axis and 180° around a horizontal axis to look up and down to simulate real human motion”
see above regarding the shop having areas within it, the shop itself being an example of a space)
determining, by the at least one processor, at least one activity associated with the at least one simulated area; determining, by the at least one processor, at least one external software integration providing at least one external software function associated with performing the at least one activity; generating, by the at least one processor, the at least one digital object in at least one simulated area corresponding to the at least one external integration, the at least one digital object comprising at least one computer instruction configured to integrate the at least one additional function into the at least one simulated area; wherein the at least one digital object is configured to, upon interaction by a user in the virtual space, cause the at least one external software integration to perform the at least one additional function associated with performing the at least one activity; (Elordi, section “User behaviour”: “…Knowing about customer’s movements and actions in the shop [example of activities in the area] can help enhance the space and placement of goods to improve user experience and sales. Our concept platform includes functionality for this purpose…” then see section “Recording user actions”: “The application continually tracks the movements of the customer as he moves around the virtual shop. This collected information is sent to the server for storage and later analysis [the analysis is an example of a determined external software integration with the activity]” and section “Representation of logged user actions” including: “In this case, the administrator has to select one session and one user in order to replay it and watch how he moved or what he looked at. 
The player includes fast-forward functionality so the user can skip to a desired point in time by dragging a slider bar control [first example of an additional function associated with the performed activity, wherein the user/”administrator” interacts by selecting the session/user in order to generate a digital object of the “one user” associated with the performed activity] …First the system can draw a map of trajectories followed by users on the top view, either for individual users or for all of them together. Then, and more importantly, it can render what we call a heat map of the virtual environment. By superimposing the trajectories of multiple users in multiple sessions, the system paints the floor image with a colour map resulting in an image that depicts in hotter (i.e. red) colours the areas where most people have wandered [second example of an additional function associated with the performed activity]…” – to further clarify, see on pages 8-9 the section “User actions representation” including: “In this work a heat map image is presented as a representation of logged users’ behaviour. To compose this image, data from all past visitors must be processed…” and see fig. 5 which provides an “Image of resulting heat map from the motion of a set of users” which is a visual example of a generated digital object – and a skilled person would have inferred, given fig. 2 and given the role of the “administrator” in using this heat map function, that the administrator interacted with the virtual space to cause this heat map to appear
for another example, see page 6, col. 2, ¶ 2: “Customers can also interact with products in the shop. When a user clicks on an object in the scene [example of an activity] the interface displays associated information in a pop up panel [example of an external integration shown with a generated digital object] such as a picture of it, price, available stock, description and other properties as defined by the administrator. This panel includes controls to add the selected product to the shopping cart in whatever quantity. An additional panel lists the current contents of the cart and allow making the actual purchase operation. [examples of additional functions, wherein the user can interact to perform the function, e.g. performing the purchase operation]”)
and publishing, by the at least one processor, the simulated space for access via at least one computing device, wherein the simulated space is configured to: receive the at least one user interaction with the simulated space; … (Elordi, abstract: “The idea involves making the e-shopping experience more intuitive, closer to the experience of visiting a real shop, and providing on-line 3D interfaces for the entire process from shop design and product placement to customer experience analysis. We researched traditional marketing and sales techniques and the way they can be transferred to 3D virtual reality environments. Web 3D technologies allow us to implement interactive virtual worlds on the Web, accessible for everyone over different platforms and without special applications or plug-ins.” – i.e. it was published to the “Web” with a “client”/“server” architecture (see section “Implementation and deployment issues” on page 7 for clarification)
as to user interactions, see above, e.g. section “User profile” and section “Interaction and navigation” as discussed above)
While Elordi does not explicitly teach the following (including the use of two coordinate systems, as Elordi only teaches the use of one coordinate system, as discussed above), Elordi in view of Behzadan teaches:
… a virtual space 3D coordinate system… wherein the virtual space 3D coordinate system is mapped to at least one position in the at least one virtual environment 3D coordinate system to locate the simulated space in the at least one virtual environment; wherein structural programming comprises a computer programming language that is configured to independently and dynamically resize at least one of the simulated space and the at least one simulated area independently of the at least one virtual environment 3D coordinate system in response to at least one user interaction with the simulated space; … and execute, responsive the at least one user interaction, the structural programming to dynamically adjust at least one of a size, a position, or shape of at least one of the simulated space or the at least one simulated area. (Elordi, section “Methodology”, including in the subsection “Shop Creation” as discussed above, and fig. 2 as discussed above; also see section “User profile” of Elordi as cited above)
In view of Behzadan, abstract and § 2.7 on pages 34-35 incl.: “Two major challenges had to be addressed during the design course of ARVISCOPE: • Converting global coordinates of an object to user’s local coordinate frame when updating the virtual contents of the augmented viewing frustum at each animation frame. • Converting local coordinates of an object to global coordinate frame when a dependent virtual object is about to be placed independently in the augmented scene [using multiple coordinate systems, wherein the “global coordinates” are in the highest level of the coordinate system hierarchy]. In this research, the global coordinate system is defined by three major axes: X which is parallel to the equator with its positive direction pointing towards the geographical east, Y which is perpendicular to the earth surface with its positive location pointing upwards, and Z which is parallel to the prime meridian with its positive direction pointing towards the geographical south. X, Y, and Z readings in the global coordinate system are used to determine the longitude, altitude, and latitude of the user as well as CAD objects in the global space [3D coordinate systems]. At the same time, each CAD object has its own unique local coordinate system. The definition of the three major axes in a local coordinate system varies between CAD objects and is mainly specified when the CAD model is created in a graphical software such as AutoCAD or 3D Studio. However, to make sure that CAD objects are placed in the global coordinate system with guaranteed consistency, their local coordinate frame should be oriented in a way that it is exactly aligned with the global coordinate frame (e.g. local X axis is parallel to the global X axis). While objects can be placed in the scene by defining their global coordinates (i.e. longitude, latitude, and altitude), all transformations (i.e. translations, and rotations) have to be done relative to the user [example of the mapping]. 
As a result, the application must be able to keep track of the latest local position and orientation of each CAD object inside the user’s coordinate frame. As shown in Figure 2.10, although the user and all CAD objects have their global position determined in the global coordinate system, CAD objects still have to be placed relative to the user’s eyes. Therefore, for each CAD object, an additional step has to be executed in order to calculate the relative distance between the user and that object and use this distance to construct or update the transformation matrix corresponding to that CAD object. Only after this matrix is calculated, the CAD object can be correctly placed inside the user’s viewing frustum.”
To further clarify, see chapter 4; in particular § 4.1 ¶¶ 1-2 and fig. 4.1 on page 102, along with its accompanying description; then see § 4.4 incl.: “The concept of scene graphs can be effectively used to facilitate the creation and management of assembled scenes. In general, a scene graph is a data structure used to organize and manage the contents of hierarchically organized scene data…”, e.g. fig. 4.4 on page 105; then see page 106 incl.: “Since the observer can constantly change position and/or head orientation in the animation, this viewpoint has to change accordingly which will cause the entire animated contents of the scene to change. In order to consider the latest changes in the observer’s global position and head orientation angles, a mobile root node is used in the adopted AR-based scene graph in this research [example of dynamically updating the simulated space in response to user interaction with it]…” – as clarified on pages 107-108, in the paragraph split between the pages, incl.: “While CAD objects may change their position and orientation due to them being individually manipulated in the scene, any change in user’s position and orientation will affect the entire scene graph and updates the augmented view immediately” then on page 108: “There are two coordinate systems involved in creating a scene graph: world (global) coordinate system, and object (local) coordinate system…However, in the AR-based scene graph shown in Figure 4.5, the position and orientation of the root node can potentially change over time due to any user’s movement in the augmented scene. As a result, the global position of the root node (i.e. the user) has to be acquired in every frame using accurate tracking devices and the entire scene has to be reconstructed based on the new positional and orientation values of the root node…”
To clarify on the resizing and adjusting, page 109 ¶ 1: “The AR scene is created by appropriately placing individual CAD objects each of which defined by a geometrical model. Each geometrical model is created in its own local coordinate frame, stored as a leaf node in the AR-based scene graph, and placed at an appropriate position and orientation inside the coordinate frame of its parent node. This is accomplished using transformation nodes. As shown in Figure 4.6, transformation nodes allow scene graph developers to manipulate the location (translation), orientation (rotation), and size (scale) of different nodes.” – to be specific, page 111, § 4.5 ¶¶ 1-2: “Animating a scene graph is achieved by manipulating the values of the transformation nodes. Position, orientation, and scale of each component in the scene can be manipulated relative to its parent node. Updating these values at each frame leads to a dynamically changing scene. Since this is done continuously over time, a smooth animation is displayed to and viewed by the user…As described earlier, the relative position of the dozer can be changed due to two main reasons: 1) Dozer moves in the scene from one point to another, or 2) User’s global position changes and new GPS data is captured by the application.”
To further clarify, see § 2.5.2 ¶ 1: “Statements in this group describe dynamic geometric transformations of scene objects. These transformations change the position, orientation, and scale (i.e. size) of objects in the 3D augmented space to depict the accurate motion objects undergo while performing operations.”
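For illustration only, the relative-placement step described in the cited portions (computing each CAD object's pose inside the user's coordinate frame from the two global poses) can be sketched as follows; this is not code from Behzadan, and all names and values are hypothetical:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def relative_transform(user_world, object_world):
    """Pose of a CAD object inside the user's coordinate frame:
    the inverse of the user's global pose times the object's global pose."""
    return np.linalg.inv(user_world) @ object_world

# Hypothetical poses: user at (10, 0, 0), object at (13, 0, 0) globally.
rel = relative_transform(translation(10.0, 0.0, 0.0),
                         translation(13.0, 0.0, 0.0))
print(rel[:3, 3])  # the object sits ~3 units ahead of the user along x
```

Updating this matrix every frame, as the quoted passages describe, is what keeps each object correctly placed in the user's viewing frustum as the user moves.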
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings from Elordi on “The idea involves making the e-shopping experience more intuitive, closer to the experience of visiting a real shop, and providing on-line 3D interfaces for the entire process from shop design and product placement to customer experience analysis” which uses “virtual reality” (Elordi, abstract) with the teachings from Behzadan on “Visualization of simulated operations has traditionally been achieved in Virtual Reality (VR)… As the size and complexity of an operation increase, such data collection becomes an arduous, impractical, and often impossible task… This directly translates into loss of financial and human resources that could otherwise be productively used. In an effort to remedy this situation, this dissertation proposes an alternate approach of visualizing simulated operations using Augmented Reality (AR) to create mixed views of real existing jobsite facilities and virtual CAD models of construction resources. The application of AR in animating simulated operations has significant potential in reducing the aforementioned Model Engineering and data collection tasks, and at the same time can help in creating visually convincing output that can be effectively communicated.” (Behzadan, abstract) wherein this also includes “Another approach which is more effective for animating a graphical scene is to create the entire graphical contents from discrete CAD models…” (Behzadan, chapter 4, § 4.1 ¶ 2)
The motivation to combine would have been that: “This directly translates into loss of financial and human resources that could otherwise be productively used. In an effort to remedy this situation, this dissertation proposes an alternate approach of visualizing simulated operations using Augmented Reality (AR) to create mixed views of real existing jobsite facilities and virtual CAD models of construction resources. The application of AR in animating simulated operations has significant potential in reducing the aforementioned Model Engineering and data collection tasks, and at the same time can help in creating visually convincing output that can be effectively communicated.” (Behzadan, abstract).
An additional motivation to combine would have been that “This provides the modeler with high manipulation level to individually change position, orientation, and scale of each of the CAD objects without affecting the integrity of the scene. Moreover, the individual scene components can be created in separate modeling tools (e.g. AutoCAD, Photoshop, AC3D, 3D Studio). This brings significant flexibility and speed to the modeling and animation processes. In this research, discrete component modeling approach was selected as it results in more flexibility and higher manipulation control when animating a scene consisting of virtual and real objects” (Behzadan, chapter 4, § 4.1 ¶ 2)
Regarding Claim 2
Elordi teaches:
The method of claim 1, further comprising: generating, by the at least one processor, a plurality of digital objects in the virtual space based at least in part on the at least one layout; wherein the plurality of digital objects comprises at least one of: furniture, decorations, fixtures, furnishings, a clothing rack, store shelving, a cash register, or a vending machine. (Elordi, as discussed above, including page 4, section “Shop, product and decoration model management”: “The process so far creates an empty space for a shop. Now the administrator can add furniture and other decorative objects to the scene, and finally place the products that will be available for purchase… All these objects will be represented as 3D models in the scene, and they must be available in the server when the virtual shop is rendered in the browser.” – then see fig. 2, which shows some of the furniture, including a clothing rack for the folded shirts as visually depicted, as well as store shelving for the handbags)
Regarding Claim 3
Elordi teaches:
The method of claim 1, wherein the at least one digital object corresponding to the at least one external integration comprises at least one of: a cash register that integrates at least one payment service into the virtual space, a mobile chat bot that integrates at least one artificial intelligence based chat service into the virtual space, or at least one data analytics model that integrates at least one data analytics service for user behavior tracking into the virtual space. (Elordi, as was discussed above, teaches an example of a data analytics model for user behavior tracking – see the section “User behaviour”: “…Knowing about customer’s movements and actions in the shop [example of an activity in the area] can help enhance the space and placement of goods to improve user experience and sales. Our concept platform includes functionality for this purpose…” then see section “Recording user actions”: “The application continually tracks the movements of the customer as he moves around the virtual shop. This collected information is sent to the server for storage and later analysis” and section “Representation of logged user actions” including: “…First the system can draw a map of trajectories followed by users on the top view, either for individual users or for all of them together. Then, and more importantly, it can render what we call a heat map of the virtual environment. By superimposing the trajectories of multiple users in multiple sessions, the system paints the floor image with a colour map resulting in an image that depicts in hotter (i.e. red) colours the areas where most people have wandered [second example of an additional function associated with the performed activity]…” – to further clarify, see on pages 8-9 the section “User actions representation” including: “In this work a heat map image is presented as a representation of logged users’ behaviour. To compose this image, data from all past visitors must be processed…” and see fig. 5, which provides an “Image of resulting heat map from the motion of a set of users” – a visual example of a generated digital object; the heat map is an example of a data analytics model that integrates a data analytics service [the generation process of the heat map by analyzing the tracked user behavior])
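For illustration only of the heat-map concept the cited sections describe (superimposing logged trajectories from multiple sessions into an occupancy image of the floor), a minimal sketch; this is not Elordi's implementation, and the grid resolution and sample data are hypothetical:

```python
import numpy as np

def heat_map(samples, floor_w, floor_h, bins=10):
    """Accumulate logged (x, y) position samples from all sessions into
    an occupancy grid; hotter cells mark areas visited more often."""
    grid = np.zeros((bins, bins), dtype=int)
    for x, y in samples:
        i = min(int(y / floor_h * bins), bins - 1)  # row index
        j = min(int(x / floor_w * bins), bins - 1)  # column index
        grid[i, j] += 1
    return grid

# Hypothetical trajectory samples on a 10 m x 10 m shop floor.
samples = [(1.0, 1.0), (1.2, 1.1), (9.0, 9.0)]
grid = heat_map(samples, 10.0, 10.0)
print(grid[1][1], grid[9][9])  # → 2 1 (the cell near (1, 1) is hottest)
```

Rendering such a grid as a colour map over the floor image yields the kind of heat map shown in Elordi's fig. 5.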
Regarding Claim 4
Elordi teaches:
The method of claim 2, further comprising: determining, by the at least one processor, the at least one digital object associated with the at least one activity; and modifying, by the at least one processor, the at least one digital object in the at least one simulated area to define the at least one external software integration so as to cause to appear that the at least one digital object provides the at least one external software function. (Elordi, as was discussed above, see the section “User behaviour”: “…Knowing about customer’s movements and actions in the shop [example of activities in the area] can help enhance the space and placement of goods to improve user experience and sales. Our concept platform includes functionality for this purpose…”
Then see page 6, col. 2, ¶ 2, as was discussed above: “Customers can also interact with products in the shop. When a user clicks on an object in the scene the interface displays associated information in a pop up panel such as a picture of it, price, available stock, description and other properties as defined by the administrator. This panel includes controls to add the selected product to the shopping cart in whatever quantity. An additional panel lists the current contents of the cart and allow making the actual purchase operation.”)
Regarding Claim 5
Elordi, in view of Behzadan, teaches:
The method of claim 1, further comprising:
generating, by the at least one processor, a plurality of graphical layers visually representing the virtual space according to the at least one digital layout and the virtual space 3D coordinate system; wherein the plurality of graphical layers are configured to be dynamically rendered based at least in part on at least one computing so as to visually mix at least one portion of the virtual space with a physical view of the simulated space. (Elordi, as was taken in view of Behzadan above for the use of augmented reality based on scene graphs, see the above citations for claim 1; and to clarify in Behzadan also see § 2.7.2: “Another major challenge in designing ARVISCOPE statements capable of independently handling the CAD objects in the augmented scene was to design methods to convert local to global coordinate values. As described in Chapter 4, a virtual CAD object in ARVISCOPE is represented by a graphical node in the augmented scene graph. This node itself can be a parent node to some other child node(s). This hierarchical approach of defining children and parent nodes provides a multi-level structure to the augmented scene in which the highest level objects are those who have been initially placed in the scene by defining their global coordinates (i.e. longitude, latitude, and altitude) and each of the lower level objects has their coordinates defined in its parent’s local coordinate system. More details on how a hierarchical structure of CAD objects is created, maintained, and updated during the course of the AR animation are presented in Chapter 4. Figure 2.16 shows how the coordinate values of CAD objects in different layers of such a structure are defined… Figure 2.17 shows an example of how a child node is moved inside the animation by changing the transformation values of its parent node… In Figure 2.17, a CAD meta-object of a tower crane is shown in which the highest level parent node is the Base node. 
The Tower, Boom, and Cable nodes are connected in the scene graph to form the complete tower crane object. This is achieved by first placing the Base node globally in the scene. The position of the Tower is defined inside the local coordinate frame of the Base. The Boom is placed locally inside the Tower coordinate frame. Also, the Cable node is a child of the Boom node. As shown in this Figure, a Steel Section has been defined as a child object to the Cable and hence is transformed (i.e. picked up, rotated, and installed) in the augmented scene by manipulating the transformational values of its immediate parent node (i.e. Cable node). As soon as the steel section is in its final position and has to be separated from the cable and installed on the foundation, its global coordinate values have to be calculated and used to find its relative transformation to the user for subsequent animation frames. The problem of conversion from local to global coordinate values has been addressed in this research using the concept of “transformation chain” which is shown in Figure 2.18… In this Figure, to find the transformation values of the lowest level node (i.e. end gripper of the robotic arm) inside the coordinate frame of the highest level node (which is the global coordinate frame), and considering the fact that the definition of different axes in different levels of the hierarchy is consistent, all the transformation matrices connected to nodes at different layers (starting from the lowest level node to the highest level node) are multiplied together.”)
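For illustration only, the quoted “transformation chain” (multiplying the local transformation matrices along the parent chain to obtain a node's global transform) can be sketched as follows; this is not code from Behzadan, and the geometry used is hypothetical:

```python
import numpy as np

def transl(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rot_z(deg):
    """4x4 rotation about the vertical (z) axis."""
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def chain_to_global(local_transforms):
    """Multiply each node's local transform, from the highest-level
    parent down to the node of interest, to get its global transform."""
    m = np.eye(4)
    for t in local_transforms:
        m = m @ t
    return m

# Hypothetical chain: a base node placed globally at x = 100 and rotated
# 90 degrees, with a child node 10 units along the base's local x axis.
g = chain_to_global([transl(100, 0, 0), rot_z(90), transl(10, 0, 0)])
# the child's global position works out to approximately (100, 10, 0)
```

This mirrors the cited passage's point that a hierarchically placed node's global coordinates are obtained by multiplying all the transformation matrices from the lowest level node up to the highest level node.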
The rationale to combine is the same as discussed above for claim 1.
Regarding Claim 6
Elordi, in view of Behzadan, teaches:
The method of claim 2, further comprising:
determining, by the at least one processor, at least one graphical layer associated with the at least one digital object in the at least one simulated space; positioning, on at least one display of the at least one computing device, by the at least one processor, the at least one graphical layer to align the simulated space with the at least one area corresponding to the space based at least in part on the user; and rendering, by the at least one processor, the at least one graphical layer so as to superimpose the at least one digital object within the simulated space. (Elordi, in view of Behzadan as cited above for claim 1, and the claim 5 citations for clarifying on the layers, e.g. Behzadan §§ 2.7 and 4.1 and fig. 4.1 for the “user’s viewing frustum” [which includes determining a view orientation of the user, e.g. figs. 4.9-4.11 and their accompanying descriptions stating on page 114 that “A very similar procedure is applied to the case in which the user’s head orientation changes”] – and see § 2.7.2 as cited above for a clarification on the layers in the animation)
The rationale to combine is the same as discussed above for claim 1.
Regarding Claim 7
Elordi teaches:
The method of claim 1, further comprising:
determining, by the at least one processor, a simulated user location within the virtual space associated with at least one physical user location based at least in part on the mapping of the at least one simulated area of the simulated space; and generating, by the at least one processor, a simulated user location representation representing the simulated user location within the simulated space so as to reflect the at least one physical user location. (Elordi, section “User profile”: “…Navigation here is a walk-type mode in first person view. The user can only walk around on the ground and look 360° around the vertical axis and 180° around a horizontal axis to look up and down to simulate real human motion… Customers must not be able to go through walls, products or decoration so a collision detection mechanism is needed. By shooting a ray from the camera position to the user’s forward direction and finding its intersection with the scene the navigation system can detect obstacles that would prevent walking forward. If we want to go upstairs, we need a gravity vector. Similarly, for floor following a vertical ray is shot downwards.”
For a second example, see the section “Representation of logged user actions”: “…As the system has collected a continuous stream of position and orientation samples with time stamps, it is able to fully reproduce a client’s session as if it was recorded by a surveillance camera. We implemented an interactive VCR-like session player that replays a user’s own view as he browsed the VR shop. In this case, the administrator has to select one session and one user in order to replay it and watch how he moved or what he looked at. The player includes fast-forward functionality so the user can skip to a desired point in time by dragging a slider bar control….”)
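For illustration only of the quoted ray-based collision test (shooting a ray along the user's forward direction and blocking a step when a wall is closer than the step length), a minimal 2D top-view sketch; this is not Elordi's implementation, and the wall layout and step length are hypothetical:

```python
def ray_segment_hit(origin, direction, a, b):
    """Distance along the ray origin + t*direction to wall segment a-b,
    or None if the ray misses the wall (2D top view)."""
    (ox, oy), (dx, dy) = origin, direction
    (ax, ay), (bx, by) = a, b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:        # ray parallel to the wall
        return None
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom  # distance along ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom  # position on segment
    if t >= 0 and 0 <= u <= 1:
        return t
    return None

def can_walk_forward(pos, forward, walls, step=0.5):
    """Block the step if any wall is closer than the step length."""
    hits = [ray_segment_hit(pos, forward, a, b) for a, b in walls]
    return all(h is None or h > step for h in hits)

walls = [((5.0, -5.0), (5.0, 5.0))]                     # a wall at x = 5
print(can_walk_forward((4.8, 0.0), (1.0, 0.0), walls))  # → False (wall 0.2 ahead)
print(can_walk_forward((0.0, 0.0), (1.0, 0.0), walls))  # → True (wall 5.0 ahead)
```

The “floor following” case the quote mentions would use the same intersection test with a vertical downward ray to find the ground height under the user.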
Regarding Claim 8
Elordi teaches:
The method of claim 7, further comprising:
receiving, by the at least one processor, a location indication from at least one mobile device associated with the user. (Elordi, section “User profile” and section “Representation of logged user actions” as discussed above for claim 7, then see the section “Implementation and deployment issues”: “Quick tests were also performed on a Samsung Galaxy Tab tablet [example of a mobile device] which showed too slow performance, achieving 1 to 3 fps. This suggests that Web 3D applications like this one need a specially adapted low complexity version to be usable on mobile devices” – i.e. this was used on a mobile device for the user input;
also see in the conclusion section, last paragraph, the following suggestion: “Another interesting advance would be the support of mobile devices such as tablets and the exploitation of their motion sensors as an interaction mode. Some tests were already made which worked but resulted in extremely slow performance of the 3D rendering.”)
In addition, the Examiner notes that while the “mid-range PC” (page 8, col. 1, ¶ 2) appears to be a desktop PC, it would have been obvious to use a mobile device instead, e.g. a laptop or a smartphone, as this would have been “Making portable” the client device (MPEP § 2144.04(V)(A)).
Regarding Claim 9
Elordi teaches:
The method of claim 1, further comprising: mapping, by the at least one processor, a second area to a second simulated area in the virtual environment based at least in part on the virtual space 3D coordinate system so as to create a second simulated space of a second place in the virtual environment. (Elordi, page 4, as discussed above, including: “…For any of those contour shapes, the designer can specify materials for ground, ceiling and walls and a wall height, and the system will create a shop geometry by extruding the contour vertically….Free-form layout: In this case a 2D grid is presented representing a top view of the space. The user can draw a polygon for the shop’s layout by clicking points on it with the mouse. A point clicked close to the first point closes the polygon.”, then page 4, col. 2, ¶ 2: “The process so far creates an empty space for a shop.” – then see page 5, col. 1, ¶ 4: “…An administrator must be able to add and precisely place objects in the scene, both decorative and products, as well as move, rotate or remove them….” – i.e. this maps a second location to create a second place when objects are added, e.g. adding the shelving visually depicted in fig. 2 maps the second location of where the shelving goes to the shelving model, and thus creates a second place in the virtual environment (e.g. the top of the shelf))
Regarding Claim 10
Elordi teaches: The method of claim 1, wherein the place comprises a physical location of a commercial entity. (Elordi, abstract: “In this paper we present innovative interfaces that join the concepts of real shops and on-line e-commerce in a Web-based virtual reality platform.” And see the section “Shop creation”: “Shop creation is the process of designing a new shop’s shape and appearance and placing virtual objects that will represent products for sale. This is the part that managers will use to create new shops and also to update their contents and distribution”)
Regarding Claim 11
Claim 11 is rejected under a similar rationale as claim 1 above, wherein Elordi teaches:
A system comprising: at least one processor in communication with at least one non-transitory computer- readable medium having software instructions stored thereon, wherein, upon execution of the software instructions, the at least one processor is configured to perform a method comprising: (Elordi, abstract, including: “…In this paper we present innovative interfaces that join the concepts of real shops and on-line e-commerce in a Web-based virtual reality platform…”)
Regarding Claim 12
This is rejected under a similar rationale as claim 2 above.
Regarding Claim 13
This is rejected under a similar rationale as claim 3 above.
Regarding Claim 14
This is rejected under a similar rationale as claim 4 above.
Regarding Claim 15
This is rejected under a similar rationale as claim 5 above.
Regarding Claim 16
This is rejected under a similar rationale as claim 6 above.
Regarding Claim 17
This is rejected under a similar rationale as claim 7 above.
Regarding Claim 18
This is rejected under a similar rationale as claim 8 above.
Regarding Claim 19
This is rejected under a similar rationale as claim 9 above.
Regarding Claim 20
This is rejected under a similar rationale as claim 10 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A. HOPKINS whose telephone number is (571) 272-0537. The examiner can normally be reached Monday to Friday, 10 AM to 7 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David A Hopkins/Primary Examiner, Art Unit 2188