DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “electronic, environmental module operable to electronically integrate”; “electronic, spatiotemporal reconstruction module to receive … and generate…”; “electronic, multi-dimensional spatiotemporal decision module operable to generate”; and “electronic, image augmentation and analysis module operable to transform” in claim 6 and its dependent claims.
Here, these modules do not necessarily invoke or describe any particular structure or class of structures to a person having ordinary skill in the art before the effective filing date of the invention. Furthermore, “electronic” does not imply or require any particular structure: the broadest reasonable interpretation of “electronic” in the context of the claims is that the module is “electronic” in some way, where, for example, a software program can be considered electronic because it requires some type of electronic device to operate yet requires no specific structure. Thus, each element above comprises a generic placeholder followed by a transition phrase such as “operable to” and then a function, without reciting sufficient structure, material, or acts to entirely perform the recited function.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. For example, Paragraph 00028 of the Specification filed 12/22/2022 defines a module as “an electronic device or circuitry that may include at least one electronic processor and at least one electronic memory operable to store specialized electronic signals constituting "instructions" that when executed by the processor cause the module, engine or platform or an associated system, subsystem, device or method that the module, engine or platform is a part of to complete one or more specific features of a function or steps in a process.”
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Strunk1 in view of Lasnel et al.2 (“Lasnel”).
Regarding claim 1, Strunk teaches a method for depicting a multi-dimensional, underwater environment of a telecommunications network (see Strunk, paragraph 0026 teaching “the surface vessel controller 38 may generate a virtual environment representative of the physical environment in which the underwater vehicle 12 is positioned. The controller 38 may generate the virtual environment based on stored sensor data (e.g., from LIDAR sensor(s), from sonar sensor(s), from camera(s), etc.), model(s) of subsea structure(s) (e.g., computer-aided design (CAD) model(s), etc.), sensor data from the sensor 58 of the surface vessel 14, data from the underwater vehicle camera 30, data from the underwater vehicle LIDAR system 32, other suitable source(s) of data, or a combination thereof” such that here a virtual environment is generated of an underwater environment that is necessarily multi-dimensional, and the environment is of at least what may be considered a network based on paragraphs 0017-0018 and figure 1 teaching “the camera 30 is directed toward a subsea structure 34, such as production pipe(s), subsea wellhead(s), riser(s), pumping equipment, other suitable subsea structure(s) and/or undersea equipment, or a combination thereof” where the subsea structure can be seen to be at least some connection among points forming a network and, for example, could be “underwater conduits” as in paragraph 0004 describing the context of the method) comprising (see the steps below, which address how such “depicting” occurs; the limitations of the preamble are addressed by the limitations that follow):
electronically integrating collected data and metadata representing one or more images of the multi-dimensional, underwater environment of the telecommunications network (note that the manner of integrating the above data is not limited, nor is the form of the output of the integrating; thus, if data is integrated or otherwise combined in any manner, the limitation is met; see Strunk, paragraphs 0025-0026 and figure 1 teaching “controller 38 may generate a virtual environment representative of the physical environment in which the underwater vehicle 12 is positioned. The controller 38 may generate the virtual environment based on stored sensor data (e.g., from LIDAR sensor(s), from sonar sensor(s), from camera(s), etc.), model(s) of subsea structure(s) (e.g., computer-aided design (CAD) model(s), etc.), sensor data from the sensor 58 of the surface vessel 14, data from the underwater vehicle camera 30, data from the underwater vehicle LIDAR system 32, other suitable source(s) of data, or a combination thereof” and “in certain embodiments, the controller 38 may generate an initial virtual environment based on stored sensor data and/or model(s) of the subsea structure(s), and then the controller 38 may generate the virtual environment by updating the initial virtual environment based on sensor data from the surface vessel and/or the underwater vehicle” such that here “lidar” and “camera” data, which are images, are collected and integrated with metadata such as “model(s) of subsea structure(s)” such that they are integrated to form “a virtual environment representative of the physical environment”), wherein the network comprises at least underwater telecommunication cables and the underwater environment comprises an environment around at least the underwater telecommunication cables (see Strunk, paragraphs 0017-0018 as above teaching that an underwater network comprises at least any “subsea structures” or “undersea equipment” and “the camera 30 is directed toward a subsea structure 34, such as production pipe(s), subsea wellhead(s), riser(s), pumping equipment, other suitable subsea structure(s) and/or undersea equipment, or a combination thereof” and “the camera is configured to output a physical image signal indicative of an image of the subsea structure 34 within the physical environment. However, the camera may also be directed to other elements within the physical environment, such as the seafloor, the surface vessel, another underwater or surface vehicle, another suitable element, or a combination thereof” such that the whole physical environment of the structures is captured and modeled, and as in paragraph 0004 such undersea equipment could be “underwater conduits”, and the data collected and integrated is of both the environment of the structures and the structures themselves, as it is “a virtual environment representative of the physical environment in which the underwater vehicle 12 is positioned” as in paragraph 0026);
receiving the integrated data and metadata and generating one or more image compositions of spatiotemporal changes to at least the environment around at least the underwater telecommunication cables over a time period (note that the manner of generating image compositions is not limited by the claim, nor is exactly what constitutes an image composition; thus an image composition of an underwater environment over a time period would include any composition, i.e., any putting together, formation, or construction of images, where such composition must in some way be a composition of the underwater environment over a time period, such that, for example, image compositions are made to be displayed, generated, or rendered over a time period, as in an image sequence, video, or the like; see Strunk, paragraphs 0026-0027 and figure 1 teaching “the controller 38 may generate an initial virtual environment based on stored sensor data and/or model(s) of the subsea structure(s), and then the controller 38 may generate the virtual environment by updating the initial virtual environment based on sensor data from the surface vessel and/or the underwater vehicle” and, as above, the virtual environment is generated from the integrated data and metadata where “controller 38 may generate the virtual environment based on stored sensor data (e.g., from LIDAR sensor(s), from sonar sensor(s), from camera(s), etc.), model(s) of subsea structure(s) (e.g., computer-aided design (CAD) model(s), etc.), sensor data from the sensor 58 of the surface vessel 14, data from the underwater vehicle camera 30, data from the underwater vehicle LIDAR system 32, other suitable source(s) of data, or a combination thereof” and thus the images collected are composed with the other data to form the virtual environment, taking into account spatiotemporal changes to the environment around the structures and to the structures themselves, as this allows the “updating the initial virtual environment based on sensor data from the surface vessel and/or underwater vehicle”, and image compositions of such changes are generated since “Once the virtual environment is generated, the controller 38 may output a virtual environment signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation of the virtual environment. In response to receiving the virtual environment signal, the display may present the visual representation of the virtual environment” and “the visual representation of the virtual environment includes a three-dimensional visual representation of the virtual environment including a three-dimensional visual representation of the seafloor, a three-dimensional visual representation of the subsea structure, a three-dimensional visual representation of other element(s) within the physical environment, or a combination thereof. While a three-dimensional visual representation of the virtual environment is disclosed herein, in certain embodiments, the visual representation of the virtual environment may be two-dimensional. Furthermore, in certain embodiments, the visual representation of the virtual environment may be stereoscopic (e.g., in embodiments in which the user interface includes a stereoscopic display)” and note that this takes place over a time period corresponding to whenever the update takes place);
generating modified image information of at least the environment around the at least the underwater telecommunication cables (see Strunk, paragraphs 0026-0027 and figure 1 teaching “the controller 38 may generate an initial virtual environment based on stored sensor data and/or model(s) of the subsea structure(s), and then the controller 38 may generate the virtual environment by updating the initial virtual environment based on sensor data from the surface vessel and/or the underwater vehicle” such that here there is an initial representation that is modified by updating, thereby generating the modified image information of the updated virtual environment that is sent to a user for display); and
transforming the one or more images and the one or more image compositions into interactive visual depictions and generating augmented, visual depictions of at least the environment around at least the underwater telecommunication cables (see Strunk, paragraphs 0026-0030 and figures 1 and 2, where the images have been transformed to be displayed to a user, as seen in figure 1, where “Once the virtual environment is generated, the controller 38 may output a virtual environment signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation of the virtual environment. In response to receiving the virtual environment signal, the display may present the visual representation of the virtual environment” and “In addition, the controller 38 may receive a control input signal indicative of a target virtual position and/or a target virtual orientation of a target virtual underwater vehicle within the virtual environment, and the controller may output a target virtual underwater vehicle signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment. In response to receiving the target virtual underwater vehicle signal from the controller 38, the display may present the visual representation of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment. In certain embodiments, the user interface 54 may be configured to output the control input signal based on input from the operator, and the controller 38 may receive the control signal input from the user interface 54. For example, the user interface may include a first hand controller configured to receive a position input from the operator, and the user interface may include a second hand controller configured to receive an orientation input from the operator. As the operator moves the first hand controller, the user interface 54 may output the control input signal to the controller 38 indicative of the target virtual position of the target virtual underwater vehicle, and the controller 38, in turn, may output the target virtual underwater vehicle signal to the display 56 of the user interface 54 indicative of instructions to display the visual representation of the target virtual underwater vehicle at the target virtual position. Accordingly, as the operator moves the first hand controller, the visual representation of the target virtual underwater vehicle may move in real-time or near real-time within the virtual environment.” and “Accordingly, as the operator moves the second hand controller, the visual representation of the target virtual underwater vehicle may move in real-time or near real-time within the virtual environment. Using the first hand controller and the second hand controller, the operator may position the target virtual underwater vehicle at any suitable location within the virtual environment and angle the target virtual underwater vehicle at any suitable orientation” such that here the images and image compositions have been transformed into interactive visual depictions which the user can control in “real-time or near real-time”, and an augmented visual depiction is generated as in paragraphs 0033-0039 and figure 2, where “controller 38 may then output a virtual environment signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation 62 of the virtual environment. In response to receiving the virtual environment signal, the display 56 may present the visual representation 62 of the virtual environment. In the illustrated embodiment, the visual representation 62 of the virtual environment includes a three-dimensional visual representation 62 of the virtual environment including a three-dimensional visual representation 64 of the seafloor. In addition, the controller 38 may receive a control input signal indicative of a target virtual position and/or a target virtual orientation of a target virtual underwater vehicle within the virtual environment, and the controller may output a target virtual underwater vehicle signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation 66 of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment. In response to receiving the target virtual underwater vehicle signal from the controller 38, the display 56 may present the visual representation 66 of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment”).
Strunk teaches all of the above but fails to specifically teach that the underwater network, underwater environment, and subsea equipment and structures constitute an environment of at least underwater telecommunication cables, that the underwater environment comprises an environment around at least the underwater telecommunication cables, and that the telecommunication cables form a telecommunications network. Rather, Strunk teaches all of the functionality required by the claims but does so in the context of imaging underwater structures and subsea equipment in relation to oil exploration and drilling. Thus, Strunk is found to contain a device/method/system which differs from the claimed device by the substitution of some components with other components.
In the same field of endeavor relating to imaging of underwater structures and subsea equipment, Lasnel teaches that it is known to image and model any “underwater conduit”, including “telecommunications cables (electrical or optical fiber)”, where images of such a cable are captured along with its surrounding environment of a “seabed” (see Lasnel, paragraphs 0029-0034 teaching imaging of “telecommunications cables” for use in inspecting “underwater conduits”, as explained in paragraphs 0003-0004). Thus, Lasnel establishes that it is known in the art to use underwater imaging systems to inspect telecommunications cables in an underwater environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Strunk with the known substitutable components of Lasnel, as doing so would be no more than the simple substitution of one known element for another to obtain predictable results. Here, one of ordinary skill in the art could have substituted the known underwater telecommunications cables of Lasnel for the underwater conduits of Strunk, with the predictable result that the telecommunications cables and the surrounding environment of such cables would be captured using Strunk’s visualization and exploration system; the visualization would simply be of a different subject, while all other functionality remains the same.
Regarding claim 2, Strunk as modified teaches all that is required as applied to claim 1 above and further teaches wherein the one or more images comprises one or more 3D images (see Strunk as modified, with Strunk teaching at paragraph 0026 that the images are 3D images, as images from the camera represent 3D objects and “lidar sensor” data is 3D image data; furthermore, “the display may present the visual representation of the virtual environment. In certain embodiments, the visual representation of the virtual environment includes a three-dimensional visual representation of the virtual environment including a three-dimensional visual representation of the seafloor, a three-dimensional visual representation of the subsea structure, a three-dimensional visual representation of other element(s) within the physical environment, or a combination thereof. While a three-dimensional visual representation of the virtual environment is disclosed herein, in certain embodiments, the visual representation of the virtual environment may be two-dimensional. Furthermore, in certain embodiments, the visual representation of the virtual environment may be stereoscopic (e.g., in embodiments in which the user interface includes a stereoscopic display)” such that here the one or more images can comprise 3D images in multiple other ways as well).
Regarding claim 3, Strunk as modified teaches all that is required as applied to claim 1 above and further teaches wherein the one or more image compositions comprise 4D image compositions (see Strunk, paragraph 0026 teaching “in certain embodiments, the controller 38 may generate an initial virtual environment based on stored sensor data and/or model(s) of the subsea structure(s), and then the controller 38 may generate the virtual environment by updating the initial virtual environment based on sensor data from the surface vessel and/or the underwater vehicle” such that here the update of the 3D environment over time generates such compositions of spatiotemporal changes, making the compositions 4D as they incorporate the 3D imaging along with a time dimension).
Regarding claim 4, Strunk as modified teaches all that is required as applied to claim 1 above and further teaches generating the modified image information using electronic representations of previously generated images of at least the environment around at least the underwater telecommunication cables and the telecommunication cables and refining the one or more images and one or more image compositions (see Strunk as modified, with Strunk at paragraph 0026 teaching “in certain embodiments, the controller 38 may generate an initial virtual environment based on stored sensor data and/or model(s) of the subsea structure(s), and then the controller 38 may generate the virtual environment by updating the initial virtual environment based on sensor data from the surface vessel and/or the underwater vehicle” such that this continuous updating uses previously generated images, and the refining is the updating of the visualization with the updated information over time).
Regarding claim 5, Strunk as modified teaches all that is required as applied to claim 1 above and further teaches wherein the augmented, visual depictions of the multi-dimensional underwater environment comprise interactive, virtual reality (see Strunk as modified, with Strunk teaching that the user can control the vehicle, and thus the visualization, in “real-time or near real-time”, and an augmented visual depiction is generated as in paragraphs 0033-0039 and figure 2, where “controller 38 may then output a virtual environment signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation 62 of the virtual environment. In response to receiving the virtual environment signal, the display 56 may present the visual representation 62 of the virtual environment. In the illustrated embodiment, the visual representation 62 of the virtual environment includes a three-dimensional visual representation 62 of the virtual environment including a three-dimensional visual representation 64 of the seafloor. In addition, the controller 38 may receive a control input signal indicative of a target virtual position and/or a target virtual orientation of a target virtual underwater vehicle within the virtual environment, and the controller may output a target virtual underwater vehicle signal to the display 56 of the user interface 54 indicative of instructions to display a visual representation 66 of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment. In response to receiving the target virtual underwater vehicle signal from the controller 38, the display 56 may present the visual representation 66 of the target virtual underwater vehicle at the target virtual position and/or the target virtual orientation within the virtual environment”).
Regarding claims 6-10, the instant claims recite a “system” version of the method of claims 1-5, respectively wherein modules carry out the functions as recited in claims 1-5, respectively. Strunk as modified teaches the method and teaches such a system (see Strunk, paragraphs 0014-0023 and figure 1 disclosing the computing system for carrying out the claim functions). In light of this, the limitations of claims 6-10 correspond to the limitations of claims 1-5, respectively; thus they are rejected on the same grounds as claims 1-5, respectively.
Regarding claims 11-15, the instant claims recite an even broader version of claims 1-5, respectively, as they recite depicting a multi-dimensional environment comprising a similar series of steps, with the only difference being that the environment is not required to be underwater. Of course, the narrower limitations of claims 1-5 address the broader limitations of claims 11-15. In light of this, the limitations of claims 11-15 correspond to the limitations of claims 1-5, respectively; thus they are rejected on the same grounds as claims 1-5, respectively. Note that claim 11 further inserts “space-based” in front of “environment”, such that the environment is “space-based” in some way. Here, the environment has already been interpreted as space-based, as it is based on the space being interacted with.
Response to Arguments
Applicant's arguments filed 1/8/2026 have been fully considered but they are not persuasive. Applicant argues on pages 6-7 of the “REMARKS” that Strunk does not teach the aspects of the claims relating to a “telecommunication network”, arguing that Strunk’s “conduits” are not part of a telecommunications network, and then alleges that, on this basis, the subject matter of the claims would not be obvious based on the combined teachings of Strunk and Lasnel. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Applicant’s argument fails to consider the Lasnel secondary reference and its teachings, which were relied upon specifically to teach the limitations missing from Strunk. As can be seen in Lasnel at paragraphs 0003-0004 and 0030-0034, it is clearly known to image telecommunications cables in the same field of endeavor of imaging and modeling an environment for inspection as Strunk, which already teaches modeling underwater environments and structures by imaging them. Thus, as explained above, it is the combination of Strunk and Lasnel that renders the claims obvious.
Applicant briefly alleges that claims 4 and 9 are allowable based on the same arguments above. Such arguments are not persuasive for the reasons explained above.
Applicant alleges that the “NFOA does not appear to set forth those parts of the combined teachings that the Examiner is relying upon to reject the claims. Instead, the NFOA relies upon arguments that apply to different claims. This does not raise a prima facie case of obviousness.” The Examiner respectfully disagrees. As noted in the rejection of those claims, the limitations clearly correspond to the limitations of claims 1-5, except that they are broader. Furthermore, the NFOA does not “rel[y] upon arguments that apply to different claims”; the NFOA relies on explanations and mappings of the prior art to the BRI of the claim language. The NFOA points out that the limitations have already been addressed and that the limitations correspond to those of other claims. Furthermore, the only distinction, the claims being broadly “space-based”, was addressed; thus the NFOA clearly establishes a prima facie case of obviousness. Applicant is welcome to point out which limitation in claims 11-15 is not addressed by the rejection of the same claim language with the same functionality. Furthermore, Applicant argues that “the combined teachings are related to underwater environments, not space-based environments.” However, the Examiner has already explained that an underwater imaging system capturing a spatiotemporal environment is operating in a space, as a space, in its broadest reasonable interpretation, is any defined extent in a set of dimensions; for example, any physical environment on Earth is a space, which would include underwater environments. Applicant appears to want a different, narrower, and more limited interpretation of “space” that is not recited, perhaps wanting the claim to read “outer space” or some space outside of Earth’s atmosphere; however, these are not claimed. Thus, the underwater environment addressed in claim 1 with the combination of references and rationale is a space-based environment, and, as the limitations correspond as explained, the prima facie case of obviousness is the same. Thus, Applicant’s arguments are not persuasive.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT E SONNERS whose telephone number is (571)270-7504. The examiner can normally be reached Monday-Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT E SONNERS/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613
1 US PGPUB No. 20220363357
2 US PGPUB No. 20130182097