DETAILED ACTION
This action is in response to the Applicant Remarks received on September 22, 2025. Claims 1-20 are pending; no claims have been canceled, and claims 1-2, 5, 7-9, 11-12, 15, and 17-19 are currently amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. As summarized in the 2019 Revised Patent Subject Matter Eligibility Guidance, examiners must perform a Two-Part Analysis for Judicial Exceptions.
Step 1
In Step 1, it must be determined whether the claimed invention is directed to a process, machine, manufacture, or composition of matter. The instant invention encompasses a system (i.e., a machine) in claims 1-10 and a method (i.e., a process) in claims 11-20 for remotely communicating instructions. All claims are directed to one of the four statutory categories and meet the requirements of Step 1.
Step 2A
Prong One
The claimed invention is directed to an abstract idea without significantly more. The instant invention is broadly directed to a system and method for “remotely communicating instructions with augmented and virtual reality objects…” (Specification, p. 2, [0002]).
Claim 11 (Currently Amended) recites the following (with emphasis added):
A method for remotely communicating instructions, the method performed by a server communicatively coupled to a first user device and an instructor device, the server, the first user device, and the instructor device individually comprising a memory and a processor, the method comprising:
receiving vicinity information from the first user device, the vicinity information characterizing a location of the first user device,
wherein the vicinity information defines visual content captured by the first user device;
receiving target information, the target information defining a physical measurement of a target person or location requesting assistance;
receiving tool information, the tool information defining a status of a tool, wherein the server receives the tool information based on the physical measurement of the target person or location;
transmitting at least a portion of the vicinity information, the target information, and the tool information to the instructor device, the instructor device configured to present the visual content within an instructor interface based on the received vicinity information and receive input from an instructor through the instructor interface, the input defining an instruction associated with the visual content;
receiving instruction information defining the instruction associated with the visual content from the instructor device,
wherein the instruction information is based at least in part on the target information and the tool information; and
transmitting at least a portion of the instruction information to the first user device, the first user device configured to present the instruction overlaid on top of the visual content within a first instructee interface based on the received instruction information.
Claim 11 (Currently Amended) encompasses the abstract idea and has substantially similar features to claim 1 (Currently Amended); the abstract idea is also encompassed by dependent claims 2-10 and 12-20.
Claims 1-20 recite the steps for communicating instructions from an instructor to an instructee using remotely connected devices. The system and method are directed to mental processes and certain methods of organizing human activity. A human – using pen and paper – is capable of noting the location of a device, measuring a target, identifying a status of a tool, and using that information while communicating an instruction to an instructee. These limitations, when given their broadest reasonable interpretation, recite collecting, analyzing, and sending data pertaining to instructing a user. Thus, the steps are directed to mental processes and certain methods of organizing human activity.
Prong Two
This judicial exception is not integrated into a practical application because mere instructions to implement the abstract idea on a computer, merely using a computer as a tool to perform the abstract idea, adding insignificant extra-solution activity, and/or generally linking the use of the abstract idea to a particular technological environment or field of use are not considered integration into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the present claims include no additional elements other than the abstract idea and numerous personal devices. The conventional computers operating over a generic network, as presented, are system components that amount to mere field-of-use limitations and/or extra-solution activity to implement the mental processes and certain methods of organizing human activity for communicating instructions to users.
Step 2B
Step 2B in the analysis requires us to determine whether the claims do significantly more than simply describe that abstract method. Mayo, 132 S. Ct. at 1297. We must examine the limitations of the claims to determine whether the claims contain an "inventive concept" to "transform" the claimed abstract idea into patent-eligible subject matter. Alice, 134 S. Ct. at 2357 (quoting Mayo, 132 S. Ct. at 1294, 1298). The transformation of an abstract idea into patent-eligible subject matter "requires ‘more than simply stat[ing] the [abstract idea] while adding the words ‘apply it.’’" Id. (quoting Mayo, 132 S. Ct. at 1294) (alterations in original). "A claim that recites an abstract idea must include ‘additional features’ to ensure ‘that the [claim] is more than a drafting effort designed to monopolize the [abstract idea].’" Id. (quoting Mayo, 132 S. Ct. at 1297) (alterations in original). Those "additional features" must be more than "well-understood, routine, conventional activity." Mayo, 132 S. Ct. at 1298.
The present claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Any potentially technical aspects of the claims are well-known, generic computational components performing conventional functions (e.g., a computer performing generic data retrieval and generation). The present claims have been analyzed both individually and in combination, and the instant claims do not provide any improvement to the functioning of the computer, to computer technology, or to any other technical field. There do not appear to be any meaningful limitations other than those that are well-understood, routine, and conventional in the field. Thus, the present claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
The claims are generally linked to implement an abstract idea on personal devices. When looked at individually and as a whole, the claim limitations are determined to be an abstract idea without "significantly more," and thus not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-3, 5-9, 11-13, and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Montgomerie [US20160291922A1].
Regarding claim 1 (Currently Amended), Montgomerie discloses:
A system for remotely communicating instructions (Montgomerie, [0065], “…one or more students may share a view of an instructor within an AR environment and may receive instruction from the instructor, as shown in FIG. 3.”), the system comprising:
a server that communicatively couples to a first user device and an instructor device (Montgomerie, [0047], “The remote expert may act as a server, accepting and managing incoming connections from a local user or possibly multiple local users simultaneously. The local user may act as a terminal, possibly connecting to the server (either locally or remotely) and possibly transmitting image data to the server. This relationship also may be reversed, such that the local user may act as a server if needed.”),
the server, the first user device, and the instructor device individually comprising a memory and a processor, the server configured to:
receive vicinity information from the first user device (Montgomerie, [0074], “Upon [the remote expert] receiving the serialized data [from the field service technician], …”),
the vicinity information characterizing a location of the first user device (Montgomerie, [0074], “… the 3D renderer 480 may update a background image such that an expert may see a field service technician's view from the “Local user/Field service” view 400. The 3D renderer 480 may then update the rendered scene's camera view according to the AR coordinates for that scene …”),
wherein the vicinity information defines visual content captured by the first user device (Montgomerie, [0074], “…and may update the positions and existence of 3D objects in the scene.”);
receive target information (Montgomerie, [0160], “Referring to FIG. 14, in one embodiment of an Object Recognition Service 1400, an AR device equipped with a camera 1410 may capture an image of an object…”),
the target information characterizing a physical measurement of a target person or location requesting assistance (Montgomerie, [0160], “That is, for every pixel in the captured image, a depth measurement may also be present, thus possibly giving a 3D mesh for the captured image.”);
receive tool information (Montgomerie's Scope SDK, cited directly below, is the mechanism for receiving tool information),
the tool information characterizing a tool (Montgomerie, [0143], “To support animation of such interactions, the Scope SDK may include definitions of a plurality of tools and other objects with predefined interaction parameters.”),
wherein the server selects the tool based on the physical measurement of the target person or location (Montgomerie, [0011], (Emphasis added) “… one method comprises: receiving combined augmented reality (AR) coordinates, 3D object information and encoded video frame from client, update a background image based on the encoded video frame, updating a scene view according to the AR coordinates, updating positions and existence of the 3D objects, loading 3D content and instructions from cloud storage, creating an updated rendered view by combining the loaded 3D content and the background image, and returning the updated rendered view to the client.”);
transmit at least a portion of the vicinity information, the target information, and the tool information to the instructor device (Montgomerie, [0010], “…combining the AR coordinates, 3D object information and encoded video frame, and transmitting the combined data to expert.”),
the instructor device configured (Montgomerie, Fig 4, “Expert View 470”) to
present the visual content within an instructor interface based on the received vicinity information (Montgomerie, [0073], “The renderer 430 may then combine the AR coordinates, 3D object information and encoded video frame and may serialize the combined data to pass to the “Expert” view 470 by way of the network 460.” and Montgomerie, [0074], “Upon receiving the serialized data, the 3D renderer 480 may update a background image such that an expert may see a field service technician's view…”) and
receive input from an instructor through the instructor interface (Montgomerie, [0074], “The 3D renderer 480 may then update the rendered scene's camera view according to the AR coordinates for that scene, and may update the positions and existence of 3D objects in the scene. The expert can create new content by adding 3D models or annotating the rendered view through drawing, highlighting or other annotations…”),
the input defining an instruction associated with the visual content (Montgomerie, [0067], “A user interface may be provided for selecting content (3D models, images, videos, text) to illustrate the step-by-step visual instructions.”);
receive instruction information defining the instruction associated with the visual content from the instructor device (Montgomerie, [0065], “…one or more students may share a view of an instructor within an AR environment and may receive instruction from the instructor, as shown in FIG. 3.”),
wherein the instruction information is based at least in part on the target information and the tool information (Montgomerie, [0067], “The Scope SDK may provide software tools to support creation of step-by-step visual instructions in Augmented Reality (AR) and Virtual Reality (VR).”); and
transmit at least a portion of the instruction information to the first user device (Montgomerie, [0065], “…one or more students may share a view of an instructor within an AR environment and may receive instruction from the instructor, as shown in FIG. 3.”),
the first user device configured to present the instruction overlaid on top of the visual content within a first instructee interface based on the received instruction information (Montgomerie, [0074], “The expert can create new content by adding 3D models or annotating the rendered view through drawing, highlighting or other annotations, and the updated rendered view is returned to the “Local user/Field service” view 400 by way of the network 460.”).
Regarding claim 2 (Currently Amended), Montgomerie discloses:
The system of claim 1, wherein the physical measurement of the target person or location is obtained by using the tool (Montgomerie, [0159], “Accordingly, the Scope SDK may provide a mechanism to recognize such objects during an AR interaction. Two such mechanisms according to embodiments of the present disclosure are discussed below with respect to FIGS. 14 and 15.” and Montgomerie, [0159], “Referring to FIG. 14, in one embodiment of an Object Recognition Service 1400, an AR device equipped with a camera 1410 may capture an image of an object, … [and], for every pixel in the captured image, a depth measurement may also be present, thus possibly giving a 3D mesh for the captured image.”).
Regarding claim 3 (Original), Montgomerie discloses:
The system of claim 1, wherein the tool is communicatively coupled with one or more of the first user device and the instructor device (Montgomerie, [0046], “The SDK also may define a set of standard tools (e.g., wrenches, screwdrivers, etc.) that may be loaded by the remote expert to the local user or vice versa.”).
Regarding claim 5 (Currently Amended), Montgomerie discloses:
The system of claim 4, wherein the physical measurement of the target person or location is a real-time measurement of one or more of
a human condition and
a human function (Montgomerie, [0169], “Embodiments of the present disclosure may provide maintenance (e.g., automotive, machinery, aircraft, etc.) support…”).
Regarding claim 6 (Original), Montgomerie discloses:
The system of claim 1, wherein the instructor interface includes a change option to change the instruction (Montgomerie, Fig 4, “Expert can create new content by adding 3D Models or “Drawing” or highlighting” and Montgomerie, Fig 7, Sequence Editor 700 and Montgomerie, Fig 17, “Expert annotates video image with 3D models, drawing, text, documentation, etc. 1710”) and
the server is further configured to
facilitate exchange of the change to the instruction between the instructor device and the first user device (Montgomerie, Fig 4 “Update view of client via network” and Montgomerie, Fig 17, “Annotated video image transmitted from expert to user 1720”).
Regarding claim 7 (Currently Amended), Montgomerie discloses:
The system of claim 1, wherein the presentation of the instruction by the first user device includes a visual representation of a usage of a tool (Montgomerie, [0145], “For example, the Scope SDK may define interaction points for wrench 1100, as shown in FIG. 11.”) with respect to the target person or location (Montgomerie, [0151-0152], “An exemplary wrench model 1100 may also define an up point 1150. These points on the wrench may interact with corresponding points on a nut 1200, as shown in FIG. 12.”).
Regarding claim 8 (Currently Amended), Montgomerie discloses:
The system of claim 7, wherein the physical measurement of the target person or location is a last measured reading of one or more of
a human condition and
a human function (Montgomerie, [0157], “These points on the nut may interact with corresponding points on a wrench 1100 as shown in FIG. 11.” In other words, the physical measurements of the nut are a last measured reading of various components related to the human function of using a wrench to tighten the nut.).
Regarding claim 9 (Currently Amended), Montgomerie discloses:
The system of claim 7, wherein the instructor interface includes a tool option to allow the instructor to interact with a visual representation of the tool to define the instruction on the usage of the tool with respect to the target person or location (Montgomerie, [0063], “as shown in FIG. 19, may include receiving combined AR coordinates, 3D object information and encoded video frame from user (1900), updating a background image (1905), update the rendered scene's camera view according to the AR coordinates (1910), updating the positions and existence of 3D objects (1915), loading 3D content and instructions from cloud storage (1920), creating new content by adding 3D models or annotating the rendered view (1925) and returning the updated rendered view to the user (1930).”),
wherein the instruction on the usage of the tool is based on a physical measurement of a second tool (Montgomerie, [0160], “Referring to FIG. 14, in one embodiment of an Object Recognition Service 1400, an AR device equipped with a camera 1410 may capture an image of an object, may analyze the captured image to generate a point cloud representing the object and may transmit the point cloud data to a cloud service 1420. That is, for every pixel in the captured image, a depth measurement may also be present, thus possibly giving a 3D mesh for the captured image.”).
Claims 11-13 and 15-19 recite limitations similar to those of claims 1-3 and 5-9. For prior art citations, see the rejection of claims 1-3 and 5-9 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 4, 10, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Montgomerie and Merjanian [US20190373304A1].
Regarding claim 4 (Original), Montgomerie discloses the use of tool information (as cited above), but Montgomerie does not explicitly disclose including an operational status of a tool within the tool information.
Merjanian, however, discloses:
The system of claim 1, wherein the tool information includes an operational status of the tool (Merjanian, [0056], “Based on the determination that the item [or tool] is available for use at the location 110, the instructor device 140 may present the visual representation of item within the instructor interface for selection by the instructor 142.” A tool being available for use is indicative of the tool's operability.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the augmented reality communication exchange system of Montgomerie, the operational status of a tool as taught by Merjanian. One of ordinary skill in the art would have recognized that applying the technique of Merjanian to the teachings of Montgomerie would have yielded predictable results, because the level of ordinary skill demonstrated by the applied references shows the ability to incorporate such data-processing features into similar systems. Further, ascertaining the operational status of a tool used in Montgomerie would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow an instructor to provide more efficient and relevant assistance by suggesting only tools available to the user for completing the necessary instruction.
Regarding claim 10 (Original), Montgomerie discloses the use of visual representations of tools presented in the instructor interface (as cited above), but Montgomerie does not explicitly disclose the visual representation of the tool being presented in the instructor interface based on a determination that the tool is available for use at the location and a determination that a status of the tool indicates operability.
Merjanian, however, discloses:
The system of claim 9, wherein the visual representation of the tool is presented in the instructor interface based on a determination that the tool is available for use at the location (Merjanian, claim 10, “The system of claim 9, wherein the visual representation of the item is presented in the instructor interface based on a determination that the item is available for use by a user at the location.”), and a determination that a status of the tool indicates operability (Merjanian, [0056], “Based on the determination that the item [or tool] is available for use at the location 110, the instructor device 140 may present the visual representation of item within the instructor interface for selection by the instructor 142.” A tool being available for use is indicative of the tool's operability.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the augmented reality communication exchange system of Montgomerie, the visual representation of a tool based on the availability and usability of the tool at the user location as taught by Merjanian. One of ordinary skill in the art would have recognized that applying the technique of Merjanian to the teachings of Montgomerie would have yielded predictable results, because the level of ordinary skill demonstrated by the applied references shows the ability to incorporate such data-processing features into similar systems. Further, ascertaining the availability and usability of the tool at the user location in Montgomerie would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow an instructor to provide more efficient and relevant assistance by suggesting only tools available to the user for completing the necessary instruction.
Claims 14 and 20 recite similar limitations to claims 4 and 10. For citations on prior art, see rejection of claims 4 and 10 above.
Response to Arguments
Applicant's arguments filed September 22, 2025 have been fully considered but they are not persuasive.
Regarding the rejection of claims 1-20 under 35 U.S.C. 101, in response to the Applicant’s arguments on pages 6-12 of the Remarks that the “claims cannot be performed in a mental process” (Remarks, page 8, para 2), the Examiner respectfully submits that a claim that requires a computer may still recite a mental process (See MPEP 2106.04(a)(2)(III)(C)). As claimed, claims 1-20 remain rejected under 35 U.S.C. 101 for reciting mental processes.
For example, claim 11 (Currently Amended) recites “receiving … information,” “transmitting … the … information,” “present[ing] the visual content,” “receiv[ing] input from an instructor,” “receiving instruction information,” and “transmitting … the instruction information.” These actions, as claimed, are analogous to how a person receives information and passes that information along. The claimed devices do not perform functions that evidence an improvement to the technology.
Regarding the rejection of claims 1-20 under 35 U.S.C. 101, in response to the Applicant’s arguments on pages 6-12 of the Remarks that the “claims are not directed to certain methods of organizing human activity” (Remarks, page 9, para 3), the Examiner respectfully submits that the MPEP states that these methods relate to, among others, “managing personal behavior or relationships or interactions between people” (See MPEP 2106.04(a)(2)(II)(C)). As stated in MPEP 2106.04(a)(2)(II)(C) (Emphasis added), “The sub-grouping ‘managing personal behavior or relationships or interactions between people’ include social activities, teaching, and following rules or instructions.” Claim 1 (Currently Amended) begins by reciting (Emphasis added), “A system for remotely communicating instructions…”, and Claim 11 (Currently Amended) begins by reciting (Emphasis added), “A method for remotely communicating instructions…”. As claimed, the limitations recite steps by which one person provides instructions to another person.
For further information on the rejection of the claims under 35 U.S.C. 101, see the corresponding section above.
Regarding the rejection of claims 1-20 under 35 U.S.C. 102, in response to the Applicant’s arguments on pages 13-15 of the Remarks that “Montgomerie does not disclose a target person or location requesting assistance” (Remarks, page 14, para 2), the Examiner respectfully submits the citations provided in the corresponding section above. Specifically, the Applicant argues, “Montgomerie does not provide any indication that the system can take a physical measurement of a person or location requesting assistance.” The Examiner respectfully submits that Montgomerie discloses (Emphasis added), “Referring to FIG. 14, in one embodiment of an Object Recognition Service 1400, an AR device equipped with a camera 1410 may capture an image of an object, may analyze the captured image to generate a point cloud representing the object and may transmit the point cloud data to a cloud service 1420. That is, for every pixel in the captured image, a depth measurement may also be present, thus possibly giving a 3D mesh for the captured image” (Montgomerie, [0160]). As cited, Montgomerie may capture an image of an object (i.e., a target person or location) and present a depth measurement (i.e., a physical measurement); therefore, Montgomerie discloses the ability to take a physical measurement of a target object.
Furthermore, the Applicant asserts, “The system in Montgomerie does not receive tool information based on physical measurements of a person or location requesting assistance” (Remarks, page 14, para 3). The Examiner respectfully submits that, as cited in the previous paragraph, Montgomerie discloses (Emphasis added), “Referring to FIG. 14, in one embodiment of an Object Recognition Service 1400, an AR device equipped with a camera 1410 may capture an image of an object, may analyze the captured image to generate a point cloud representing the object and may transmit the point cloud data to a cloud service 1420. That is, for every pixel in the captured image, a depth measurement may also be present, thus possibly giving a 3D mesh for the captured image” (Montgomerie, [0160]). As cited, Montgomerie may capture an image of an object (i.e., a person or location), present a depth measurement (i.e., a physical measurement), and transmit the object information (i.e., tool information) to a cloud service (i.e., the system/server). Moreover, the object information is based on the depth measurement; therefore, Montgomerie discloses the tool information is based on the physical measurement.
For further information on the rejection of the claims under 35 U.S.C. 102, see the corresponding section above.
Regarding the rejection of claims 1-20 under 35 U.S.C. 103, in response to the Applicant’s arguments on pages 15-16 of the Remarks that, “Merjanian does not qualify as prior art under 35 U.S.C. § 102(b)(2) and 102(c)” (Remarks, page 15, para 6), the Examiner respectfully submits that the 35 U.S.C. § 102(b)(2) exceptions apply only to prior art applied under 35 U.S.C. § 102(a)(2); Merjanian, however, qualifies as prior art under 35 U.S.C. § 102(a)(1), so only the exceptions of 35 U.S.C. § 102(b)(1) are available when reviewing whether the reference may be disqualified as prior art. As formally cited below, 35 U.S.C. § 102(b)(1) excepts a disclosure (i.e., the reference) from the prior art only if the disclosure was made one (1) year or less before the effective filing date of the claimed invention (i.e., the instant application).
Specifically, the instant application, when utilizing the Applicant’s claim to Domestic Benefit, has an effective filing date of February 3, 2021, and Merjanian was published as U.S. Patent Application Publication No. US20190373304A1 on December 5, 2019. Merjanian was published one (1) year, one (1) month, and twenty-nine (29) days (excluding the end date) before the instant application was effectively filed; therefore, the exception under 35 U.S.C. § 102(b)(1) does not apply and the reference qualifies as prior art.
For ease of reference for the Applicant, the Examiner has provided 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 102(b)(1) below (Emphasis added):
35 U.S.C. § 102(a)(1) – Novelty; Prior Art:
A person shall be entitled to a patent unless- (1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention;
35 U.S.C. § 102(b)(1) – Exceptions:
Disclosures made 1 year or less before the effective filing date of the claimed invention.-A disclosure made 1 year or less before the effective filing date of a claimed invention shall not be prior art to the claimed invention under subsection (a)(1) if-
(A) the disclosure was made by the inventor or joint inventor or by another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor; or
(B) the subject matter disclosed had, before such disclosure, been publicly disclosed by the inventor or a joint inventor or another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor.
For further information on the rejection of the claims under 35 U.S.C. 103, see the corresponding section above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY JOSEPH POLLOCK whose telephone number is (703)756-5952. The examiner can normally be reached Monday-Friday 10:00am-8:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Z.J.P./Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715