Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED OFFICE ACTION
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 2024-04-24 in compliance with the provisions of 37 CFR 1.97 has been considered by the examiner and made of record in the application file.
Claim Status
Claims 1-9 are pending in this application and are under examination in this Office Action. No claims have been allowed.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Regarding claim 8,
the claim recites “a reception device that receives a beam irradiated from a mobile with a beam” and later recites “within a range in which the light reception device is irradiated with the beam.” Claim 8 does not provide antecedent basis for “the light reception device,” and it is unclear whether “the light reception device” is intended to refer to the “reception device,” the “light receiver” of claims 1/4, or some other component.
Further, claim 8 repeatedly recites “a mobile” (“a control method of a mobile…”, and “a beam irradiated from a mobile…”) without clarifying whether the latter “mobile” is the same mobile that performs the claimed method or a different mobile. As a result, the scope of the claim is ambiguous regarding what device is irradiating the beam and what device is receiving the beam.
Accordingly, claim 8 is indefinite under 35 U.S.C. 112(b).
Claim Rejections – 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for the obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
As reiterated by the Supreme Court in KSR, and as set forth in MPEP 2141 (R-01.2024), II, the factual inquiries of Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), applied for establishing a background for determining obviousness under 35 U.S.C. §103, are summarized as follows:
Determining the scope and content of the prior art;
Ascertaining the differences between the prior art and the claims at issue;
Resolving the level of ordinary skill in the pertinent art; and
Considering objective evidence indicative of obviousness or non-obviousness, if present.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
Claims 1, 2 and 8 are rejected under 35 U.S.C. §103 as being unpatentable over Bodin et al. (US20100004798A1) in view of Murata et al. (US20120230701A1) and further in view of Romain et al. (US7602480B2) and DeVaul et al. (US8634974B2).
Claim 1
Bodin teaches that a “waypoint” is a position chosen as a destination for navigation of a route, i.e., a designated position: “[0042] A waypoint is a position chosen as a destination for navigation of a route. A route has one or more waypoints. That is, a route is composed of waypoints, including at least one final waypoint, and one or more intermediate waypoints” [Bodin, ¶ [0042]].
Bodin further teaches receiving a current position from a GPS receiver and calculating a new heading from the current position to the waypoint (moving direction to the designated position): “[0134] The method of FIG. 6 includes periodically repeating (610) the steps of receiving (602) in the remote-control device from the GPS receiver a current position of the UAV, and calculating (604) a new heading from the current position to the waypoint. The method of FIG. 6 also includes identifying (606) flight control instructions for flying the UAV on the new heading, and transmitting (608), from the remote-control device to the UAV, the flight control instructions for flying the UAV on the new heading. In this method, if Lon1, Lat1 is taken as the current position, and Lon2, Lat2 is taken as the waypoint position, then the new heading may be calculated generally as the inverse tangent of ((Lat2 − Lat1) / (Lon2 − Lon1))” [Bodin, ¶ [0134]].
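For illustration only, Bodin's inverse-tangent heading computation can be sketched as follows; the function name, the clockwise-from-north heading convention, and the use of atan2 (rather than a bare inverse tangent, which would not resolve the quadrant) are illustrative assumptions, not drawn from Bodin.

```python
import math

def heading_to_waypoint(cur_lat, cur_lon, wp_lat, wp_lon):
    # Heading in degrees clockwise from north, following Bodin's
    # inverse-tangent formulation; atan2 additionally resolves the
    # quadrant and the zero-longitude-difference case.
    angle = math.atan2(wp_lon - cur_lon, wp_lat - cur_lat)
    return math.degrees(angle) % 360.0

# A waypoint due east of the current position gives a 90-degree heading.
print(heading_to_waypoint(35.0, 139.0, 35.0, 140.0))  # 90.0
```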
Bodin does not expressly teach an optical space communication device/transmission module. However, within analogous art, Murata teaches an optical space communication device/transmission module that scans a direction of a transmission beam and irradiates an optical transmission signal to an opposite communication device (optical transmitter irradiating a light receiver with a beam): “[0057] Each of the transmission modules 12-1 to 12-M scans a direction of a transmission beam and ‘irradiates’ an optical transmission signal to the opposite communication device. Then, it is possible to transmit the optical signal toward the direction of 360 degrees by a plural of the transmission modules 12-1 to 12-M. Moreover, each of the transmission modules 12-1 to 12-M performs a transmission by one channel basically.” [Murata, ¶ [0057]].
Murata does not expressly teach predictive pointing/repositioning such that the optical terminal is repositioned. However, within analogous art, Romain teaches predictive pointing/repositioning such that the optical terminal is repositioned “to stare at that point just prior to the moving station… arriving there,” i.e., pre-adjusting pointing before the geometry changes due to motion: “… To summarize, for re-acquiring a moving station 12 after the loss of tracking lock, a cubic spline interpolation is used to mathematically estimate the flight path 26 of the moving station 12 between a set of known waypoints 28. The resultant interpolation provides a series of data points/curves representing estimated longitude/latitude as a function of time. See FIGS. 5 and 6. Conceptually, the next step is to determine a particular point (defined by altitude, longitude, and latitude) through which the moving station 12 will pass at a particular time, and to re-position the ground station optical terminal 22 to stare at that point just prior to the moving station 12 arriving there. To do so, a time interval t′ is calculated, which corresponds to the time period between the current time and the point in time at which the moving station 12 last passed through a particular waypoint, e.g., the node 30, plus a factor A for repositioning the optical terminal. This is related to the interpolated flight path data in that the time-0 point on the …” [Romain, Col.6].
Romain does not expressly teach predicting a location using a last-known location. However, within analogous art, DeVaul teaches predicting a location using a last-known location and a last-known motion vector and controlling a pointing mechanism to adjust a pointing axis of an optical-communication component to maintain an optical link: “… Determining a location of a first balloon, wherein the first balloon comprises an optical-communication component that is configured to communicate with a second balloon via a free-space optical link; Determining a predicted location of the second balloon relative to the location of the first balloon based on a last known location and a last known motion vector of the second balloon; Controlling a pointing mechanism to adjust a pointing axis of the optical-communication component in the first balloon based on the predicted location, to maintain the free-space optical link with the second balloon …” [DeVaul, Figure 7].
And “… In a second aspect, a method is provided. The method includes determining a location of a first balloon. The first balloon includes an optical-communication component that is configured to communicate with a second balloon via a free-space optical link. The method additionally includes determining a predicted location of the second balloon relative to the location of the first balloon based on a last known location and a last-known motion vector of the second balloon. The method also includes controlling a pointing mechanism to adjust a pointing axis of the optical-communication component in the first balloon based on the predicted location, to maintain the free-space optical link with the second balloon …” [DeVaul, Col.1].
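DeVaul's predicted-location step reduces to dead reckoning from the last known location and motion vector; a minimal sketch, using planar coordinates and names chosen purely for illustration:

```python
import math

def predict_location(last_pos, motion_vector, dt):
    # Dead-reckon the counterpart's position from its last known location
    # and last known motion vector (planar approximation for illustration).
    return (last_pos[0] + motion_vector[0] * dt,
            last_pos[1] + motion_vector[1] * dt)

def pointing_azimuth(own_pos, target_pos):
    # Azimuth (degrees clockwise from north) used to adjust the pointing
    # axis of the optical-communication component toward the prediction.
    d_north = target_pos[0] - own_pos[0]
    d_east = target_pos[1] - own_pos[1]
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

pred = predict_location((10.0, 20.0), (0.5, 0.0), 2.0)
print(pred)                                  # (11.0, 20.0)
print(pointing_azimuth((10.0, 20.0), pred))  # 0.0 (due north)
```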
FSO links are highly sensitive to pointing error; even small angular deviations can cause large received-power loss and link drop. A POSITA would therefore be motivated to use readily available navigation information (destination/waypoint and computed heading) as feed-forward inputs to the optical beam-steering subsystem. Bodin provides instruction reception, current position acquisition, and heading computation to a waypoint, while Murata provides a steerable optical transmission beam. Romain and DeVaul teach predictive pointing and pre-repositioning based on future relative geometry and motion, directly addressing alignment loss during motion onset.
Combining these teachings is technically coherent and uses each reference for its established purpose: guidance computes direction; optical terminals steer beams; predictive pointing anticipates geometry change. Using the computed moving direction to initiate beam-direction change control before the mobile starts moving predictably improves acquisition probability, reduces dropouts during acceleration, and reduces reacquisition time. The combination does not change the principle of operation; it integrates known inputs into a known pointing loop with a reasonable expectation of success, consistent with KSR and the Graham factual inquiries.
Accordingly, it would have been obvious to a person of ordinary skill in the art (POSITA) to combine Bodin’s waypoint-based movement instruction/heading determination with Murata’s steerable optical transmission beam and with Romain/DeVaul’s predictive pointing teachings because doing so yields predictable results: initiating beam direction change control using known motion direction and predicted relative location improves acquisition probability and maintains beam/receiver alignment before and during motion onset, reducing link dropouts and reacquisition time under KSR.
Claim 2
With respect to claim 2, all limitations of claim 1 are taught by Bodin, Murata, Romain, and DeVaul, except wherein claim 2 additionally requires that the movement controller moves the mobile toward the designated position after a predetermined delay time has elapsed since the change control is started. Added “after a predetermined delay time has elapsed since the change control is started.”
Bodin does not expressly teach delaying commencement of movement after initiating beam re-pointing/change control. However, within analogous art, Romain expressly teaches using a time interval associated with repositioning the optical terminal and repeating the process after an appropriate wait interval: “… At Step 110, the value for t′ is referenced to the cubic spline interpolation of the waypoint data such as that shown in FIGS. 5 and 6, to obtain a corresponding latitude φ(t′) and longitude λ(t′). At Step 112, the optical terminal 22 is pointed to ‘stare’ at the azimuth and elevation corresponding to latitude φ(t′) and longitude λ(t′). Alternatively, the optical terminal 22 is pointed to stare at the next waypoint 28 after φ(t′) and λ(t′). If the moving station 12 is not re-acquired by time t′+ε, where ε is an appropriate wait interval, as determined at Step 114, the process is repeated as far back as Step 108. If the re-acquisition process fails after N repetitions, as determined at Step 116, where N is user selected, the optical terminal is pointed to a selected ‘home’ waypoint of the flight pattern 26 for waiting until reacquisition occurs, as at Step 118 …” [Romain, Col.5].
And “… To do so, a time interval t′ is calculated, which corresponds to the time period between the current time and the point in time at which the moving station 12 last passed through a particular waypoint, e.g., the node 30, plus a factor A for repositioning the optical terminal. This is related to the interpolated flight path data in that the time-0 point on the interpolated data corresponds to the time point of the moving station 12 passing through the node 30. The value of t′ is calculated based in part on GPS-sourced position data received from the moving station 12 prior to the tracking lock being lost. Once calculated, t′ is cross-referenced to the interpolated flight path data, which provides an estimate of the moving station’s future location (longitude and latitude) at the end of time period t′. The ground station optical terminal 22 is then re-pointed to stare at a point in space corresponding to the altitude (a known, pre-designated value) and estimated longitude and latitude …” [Romain, Col.6-7].
The delay-time requirement is a standard control refinement when a high-directivity beam is steered by a gimbal or scanning element. Steering mechanisms require finite time to reposition and settle; starting motion immediately after issuing a new pointing command can transiently increase pointing error and cause the receiver to fall outside the beam footprint.
Romain expressly teaches time intervals and an appropriate wait interval in the context of repositioning and re-acquisition. A POSITA would therefore have been motivated to introduce a predetermined delay after initiating change control, allowing the pointing mechanism to settle toward the new direction before movement begins, yielding the predictable benefits of improved link continuity, reduced misalignment, and reduced risk of loss of lock at motion onset.
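The claim-2 ordering (initiate change control, let a predetermined delay elapse, then begin movement) can be sketched as follows; the delay value and the callback interface are hypothetical placeholders, and a real delay would be derived from the steering mechanism's settling time:

```python
import time

BEAM_SETTLE_DELAY_S = 0.05  # hypothetical predetermined delay

def delayed_start(command_beam, command_motion, heading_deg):
    # Beam-direction change control starts first; movement toward the
    # designated position starts only after the predetermined delay.
    command_beam(heading_deg)
    time.sleep(BEAM_SETTLE_DELAY_S)
    command_motion(heading_deg)

events = []
delayed_start(lambda h: events.append("beam"),
              lambda h: events.append("move"), 45.0)
print(events)  # ['beam', 'move']
```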
Claim 8
Bodin teaches that a “waypoint” is a position chosen as a destination for navigation of a route, i.e., a designated position “[0042] A waypoint is a position chosen as a destination for navigation of a route. A route has one or more waypoints. That is, a route is composed of waypoints, including at least one final waypoint, and one or more intermediate waypoints” [Bodin, ¶ [0042]].
Bodin further teaches that the UAV includes a communications adapter that facilitates receiving flight control instructions from a remote-control device, corresponding to receiving a movement instruction to move the mobile to a designated position “[0063] UAV (100) includes communications adapter (170) implementing data communications connections (184) to other computers (162), which may be wireless networks, satellites, remote control devices, servers, or others as will occur to those of skill in the art. Communications adapter (170) advantageously facilitates receiving flight control instructions from a remote-control device. Communications adapters implement the hardware level of data communications connections through which UAVs transmit wireless data communications. Examples of communications adapters include wireless modems for dial-up connections through wireless telephone networks” [Bodin, ¶ [0063]].
Additionally, Bodin teaches receiving from the GPS receiver a current position of the UAV and calculating a new heading from the current position to the waypoint, corresponding to acquiring a current position and acquiring a moving direction to the designated position: “[0134] The method of FIG. 6 includes periodically repeating (610) the steps of receiving (602) in the remote-control device from the GPS receiver a current position of the UAV, and calculating (604) a new heading from the current position to the waypoint. The method of FIG. 6 also includes identifying (606) flight control instructions for flying the UAV on the new heading, and transmitting (608), from the remote-control device to the UAV, the flight control instructions for flying the UAV on the new heading. In this method, if Lon1, Lat1 is taken as the current position, and Lon2, Lat2 is taken as the waypoint position, then the new heading may be calculated generally as the inverse tangent of ((Lat2 − Lat1) / (Lon2 − Lon1))” [Bodin, ¶ [0134]].
Bodin does not expressly teach scanning a direction of a transmission beam and irradiating an optical transmission signal. However, within analogous art, Murata teaches scanning a direction of a transmission beam and irradiating an optical transmission signal to an opposite communication device, corresponding to changing an irradiation direction of a beam while irradiating the reception device: “[0057] Each of the transmission modules 12-1 to 12-M scans a direction of a transmission beam and ‘irradiates’ an optical transmission signal to the opposite communication device. Then, it is possible to transmit the optical signal toward the direction of 360 degrees by a plural of the transmission modules 12-1 to 12-M. Moreover, each of the transmission modules 12-1 to 12-M performs a transmission by one channel basically” [Murata, ¶ [0057]].
Murata does not expressly teach predictive pointing/repositioning. However, within analogous art, Romain teaches predictive pointing/repositioning such that the optical terminal is repositioned, corresponding to performing change control before the mobile starts to move toward the designated position: “… To summarize, for re-acquiring a moving station 12 after the loss of tracking lock, a cubic spline interpolation is used to mathematically estimate the flight path 26 of the moving station 12 between a set of known waypoints 28. The resultant interpolation provides a series of data points/curves representing estimated longitude/latitude as a function of time. See FIGS. 5 and 6. Conceptually, the next step is to determine a particular point (defined by altitude, longitude, and latitude) through which the moving station 12 will pass at a particular time, and to re-position the ground station optical terminal 22 to stare at that point just prior to the moving station 12 arriving there. To do so, a time interval t′ is calculated, which corresponds to the time period between the current time and the point in time at which the moving station 12 last passed through a particular waypoint, e.g., the node 30, plus a factor A for repositioning the optical terminal. This is related to the interpolated flight path data in that the time-0 point …” [Romain, Col.6].
Romain does not expressly teach predicting a location based on a last-known location. However, within analogous art, DeVaul teaches predicting a location based on a last-known location and a last-known motion vector and controlling a pointing mechanism to adjust a pointing axis of an optical-communication component, corresponding to performing irradiation-direction change control according to the moving direction/predicted motion to maintain irradiation of the receiver: “… Determining a location of a first balloon, wherein the first balloon comprises an optical-communication component that is configured to communicate with a second balloon via a free-space optical link; Determining a predicted location of the second balloon relative to the location of the first balloon based on a last known location and a last known motion vector of the second balloon; Controlling a pointing mechanism to adjust a pointing axis of the optical-communication component in the first balloon based on the predicted location, to maintain the free-space optical link with the second balloon …” [DeVaul, Figure 7]. And “… In a second aspect, a method is provided. The method includes determining a location of a first balloon. The first balloon includes an optical-communication component that is configured to communicate with a second balloon via a free-space optical link. The method additionally includes determining a predicted location of the second balloon relative to the location of the first balloon based on a last known location and a last-known motion vector of the second balloon. The method also includes controlling a pointing mechanism to adjust a pointing axis of the optical-communication component in the first balloon based on the predicted location, to maintain the free-space optical link with the second balloon …” [DeVaul, Col.1].
Claim 8 is the method expression of the same operational sequence used in the system claims: receive the movement instruction, determine current position, compute direction to the designated position, and initiate beam-direction change control before movement begins. A POSITA would implement these steps because they reflect the natural ordering of guidance-and-pointing; the heading to the waypoint is computed before actuation, making it available for pre-pointing.
Murata provides the steerable beam transmission, Bodin provides instruction reception and heading computation, and Romain/DeVaul provide pre-repositioning/predictive pointing concepts. Expressing the combined system behavior as a method yields the same predictable improvement—reduced misalignment at motion onset and improved link continuity with a reasonable expectation of success under KSR.
Accordingly, it would have been obvious to a POSITA to perform the method steps of claim 8 by using Bodin’s received flight control instructions and computed heading to a waypoint (designated position) as inputs to Murata’s steerable beam transmission and to Romain/DeVaul’s predictive pointing/repositioning control, because doing so applies known techniques according to their established functions to reduce loss of alignment and improve link continuity during motion onset (KSR).
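The method ordering discussed above (receive instruction, acquire position, compute direction, pre-point the beam, then move) can be sketched as an ordered pipeline; every callback and name here is a placeholder for illustration, not claim language:

```python
def claim8_method(receive_instruction, get_position, compute_direction,
                  change_beam_direction, move):
    # The moving direction is computed before any actuation, so it is
    # available for beam pre-pointing before movement begins.
    designated = receive_instruction()
    current = get_position()
    direction = compute_direction(current, designated)
    change_beam_direction(direction)  # change control before motion onset
    move(designated)
    return direction

log = []
result = claim8_method(
    receive_instruction=lambda: (1.0, 1.0),
    get_position=lambda: (0.0, 0.0),
    compute_direction=lambda cur, dst: 45.0,
    change_beam_direction=lambda h: log.append("beam"),
    move=lambda dst: log.append("move"),
)
print(log, result)  # ['beam', 'move'] 45.0
```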
Claims 3 and 9 are rejected under 35 U.S.C. §103 as being unpatentable over Bodin et al. in view of Murata et al. and further in view of Romain et al. and DeVaul et al. and further in view of Lesh et al. (US5517016), Fink (US9057604B1) and McGarry (NASA Goddard Space Flight Center).
Claim 3
With respect to claim 3, all limitations of claim 1 are taught by Bodin, Murata, Romain, and DeVaul, except wherein claim 3 additionally requires that the beam controller changes an irradiation direction in a direction opposite to the moving direction within a range in which the light receiver is irradiated with a beam. Added “in a direction opposite to the moving direction…”.
Romain teaches pre-pointing/repositioning before arrival and DeVaul teaches predictive pointing based on predicted location and motion vector, which support the base claim-1 limitations referenced in claim 3.
Bodin, Romain and DeVaul do not expressly teach the specific “opposite direction / point-behind” wording. However, within analogous optical pointing/tracking art, Lesh expressly teaches that conventional lasercom designs require a separate beam steering mirror “to provide the point-ahead angle in order to compensate for the relative motion between transmit and receive systems,” and that the desired transmit direction is defined by beacon direction and a relative velocity vector between platforms: “… The conventional design approach is to sense the beacon line-of-sight jitter using a high-speed tracking detector and to control said jitter using a high-bandwidth steering mirror. This design approach does indeed stabilize the beacon line-of-sight, but unfortunately requires a separate beam steering mirror to provide the point-ahead angle in order to compensate for the relative motion between transmit and receive systems. Furthermore, a wide field-of-view acquisition detector is generally required to permit initial signal acquisition. The complexity of this conventional design approach has led to higher development costs for lasercom systems …” [Lesh, Col.2].
“… In order to achieve the desired pointing accuracy, an auxiliary pointing sensor and a beam steering mechanism to compensate for platform vibration must be integral parts of the lasercom instrument design. Sensing of pointing error is typically accomplished with the aid of a beacon signal from the receiving site. The beacon signal defines a directional reference from which any deviation produced by the platform disturbance can be referenced. This beacon direction and the relative velocity vector between the transmit and receive platforms define the desired direction to transmit the downlink signal. By sensing the deviation from this desired pointing angle and feeding back the error signal to the beam steering elements, the lasercom system can stabilize the pointing even if the platform jitter is several times larger than the required pointing accuracy …” [Lesh, Col.3].
Lesh does not expressly teach a point-ahead laser pointer-tracker system. However, within analogous art, Fink further expressly teaches a point-ahead laser pointer-tracker system in which “the transmitted beam direction is driven by a servo loop …” to steer the transmitted beam to a point-ahead angle (a deliberate offset between receive direction and transmit direction): “… Improved point-ahead laser pointer-tracker systems are provided by this invention. In a first embodiment of this invention employing a shared aperture, energy returned from the current aimpoint on the target is used in computing the point-ahead angle. The transmitted beam direction is driven by a servo loop in response to the angular difference between the locations of the images of the current aimpoint and the desired aimpoint on the target. The pointing servo loop performing this computation steers the transmitted laser beam to the correct point-ahead angle without any loop corrections or offsets in the loop. One implementation of this first embodiment uses the reflected energy of the transmitted laser beam to sense the current aimpoint. The second implementation of this first embodiment uses the reflected energy of a designator laser beam aligned to the transmitted laser beam to compute the point-ahead angle …” [Fink, Col.1].
Additionally, within analogous art, McGarry expressly teaches point-ahead versus point-behind operation, including that the telescope is pointed behind to receive returns, evidencing an intentional opposite-direction bias (lead/lag) in pointing control: “… Due to the narrow beam divergence, the transmit beam will have to be pointed ahead of the satellite’s location at the actual time of fire. Similarly, the received returns will arrive from the satellite’s past location. While the differences in angle between point-ahead and point-behind are small, for SLR2000 they are a significant fraction of the beam divergence and receiver field of view. Thus, the system will have to be able to independently point the outgoing laser pulse ahead of the telescope; the telescope will be pointed behind, to receive the returns from the satellite. The point-ahead angles are dependent on the orbit. Table 1 gives a list for some representative satellites …” [McGarry, p.4, Table 1].
Lead/lag compensation is driven by fundamental propagation-delay and relative-motion effects: when the platform moves and the control loop has latency, the direction needed to maintain alignment can include a bias that is not identical to the instantaneous motion direction. In narrow-beam optical systems, such biasing is a known technique to preserve link margin under motion and delay.
McGarry describes point-ahead versus point-behind operation, including pointing the telescope behind to receive returns from a past location. A POSITA would be motivated to apply an analogous lead/lag (including point-behind) bias within the beam-direction change control to keep the receiver within the beam footprint during motion onset and latency, which predictably improves stability and reduces reacquisition events.
Accordingly, it would have been obvious to a POSITA to implement a compensating opposite-direction bias (point-behind / lead-lag) in the beam-direction change control loop to maintain irradiation of the receiver during motion onset and propagation delay, yielding predictable improvements in alignment stability (KSR).
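A lead/lag bias of the kind discussed above can be sketched as a simple rate-times-latency offset, clamped to the beam footprint; the linear bias model and all parameter names are illustrative assumptions, not taken from Lesh, Fink, or McGarry:

```python
def biased_irradiation_angle(target_deg, motion_rate_dps, latency_s,
                             beam_half_width_deg):
    # Bias opposite to the moving direction (point-behind / lag), clamped
    # so the receiver stays within the range irradiated by the beam.
    bias = -motion_rate_dps * latency_s
    bias = max(-beam_half_width_deg, min(beam_half_width_deg, bias))
    return target_deg + bias

print(biased_irradiation_angle(90.0, 10.0, 0.1, 2.0))   # 89.0 (1-degree lag)
print(biased_irradiation_angle(90.0, 100.0, 0.1, 2.0))  # 88.0 (clamped at footprint edge)
```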
Claim 9
With respect to claim 9, all limitations of claim 2 are taught by Bodin, Murata, Romain, and DeVaul, except wherein claim 9 additionally requires that the beam controller changes the irradiation direction in a direction opposite to the moving direction within a range in which the light receiver is irradiated with the beam. Added “opposite to the moving direction…”.
Romain teaches pre-pointing/repositioning before arrival, and DeVaul teaches predicting location using a motion vector and controlling a pointing mechanism, supporting the base claim-1/claim-2 limitations referenced in claim 9.
Bodin, Murata, Romain, and DeVaul do not expressly teach the “opposite-direction” irradiation change control. However, within analogous optical pointing art, Lesh teaches providing a point-ahead angle to compensate relative motion and defining transmit direction based on a relative velocity vector: “… The conventional design approach is to sense the beacon line-of-sight jitter using a high-speed tracking detector and to control said jitter using a high-bandwidth steering mirror. This design approach does indeed stabilize the beacon line-of-sight, but unfortunately requires a separate beam steering mirror to provide the point-ahead angle in order to compensate for the relative motion between transmit and receive systems. Furthermore, a wide field-of-view acquisition detector is generally required to permit initial signal acquisition. The complexity of this conventional design approach has led to higher development costs for lasercom systems …” [Lesh, Col.2].
However, within analogous art, Fink teaches driving transmitted beam direction by a servo loop to a point-ahead angle: “… Improved point-ahead laser pointer-tracker systems are provided by this invention. In a first embodiment of this invention employing a shared aperture, energy returned from the current aimpoint on the target is used in computing the point-ahead angle. The transmitted beam direction is driven by a servo loop in response to the angular difference between the locations of the images of the current aimpoint and the desired aimpoint on the target. The pointing servo loop performing this computation steers the transmitted laser beam to the correct point-ahead angle without any loop corrections or offsets in the loop. One implementation of this first embodiment uses the reflected energy of the transmitted laser beam to sense the current aimpoint. The second implementation of this first embodiment uses the reflected energy of a designator laser beam aligned to the transmitted laser beam to compute the point-ahead angle …” [Fink, Col.1].
Additionally, McGarry teaches point-behind operation: “… Due to the narrow beam divergence, the transmit beam will have to be pointed ahead of the satellite’s location at the actual time of fire. Similarly, the received returns will arrive from the satellite’s past location. While the differences in angle between point-ahead and point-behind are small, for SLR2000 they are a significant fraction of the beam divergence and receiver field of view. Thus, the system will have to be able to independently point the outgoing laser pulse ahead of the telescope; the telescope will be pointed behind, to receive the returns from the satellite. The point-ahead angles are dependent on the orbit. Table 1 gives a list for some representative satellites …” [McGarry, p.4, Table 1].
Claim 9 combines the delay-timing of claim 2 with the lead/lag (opposite-direction) bias of claim 3. A POSITA would be motivated to use both techniques together because they address complementary sources of pointing error: (i) actuator/servo settling time and (ii) propagation/control-loop latency relative to platform motion.
Using a predetermined delay allows the pointing mechanism to stabilize, while applying lead/lag bias maintains alignment as geometry begins to change. The combined effect is a predictable increase in link robustness for narrow-beam optical communication during acceleration and early movement, reducing dropouts and reacquisition time.
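For illustration only (not drawn from any cited reference; the function, parameter names, and values are hypothetical), the combined delay-then-bias schedule discussed above can be sketched in a few lines:

```python
def pointing_angle(nominal_deg, t_since_cmd_s, delay_s, lead_deg):
    """Claim-9 style schedule: hold the nominal beam angle until a
    predetermined delay elapses so the actuator/servo can settle, then
    add a fixed lead/lag bias against the anticipated drift once the
    platform is expected to be moving."""
    if t_since_cmd_s < delay_s:
        return nominal_deg          # settling phase: no bias applied
    return nominal_deg + lead_deg   # movement phase: lead the motion
```

For example, with a 0.5 s delay and a 2° lead bias, the beam holds the nominal angle for 0.5 s after the command and is biased thereafter.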
Claims 4, 5 and 6 are rejected under 35 U.S.C. §103 as being unpatentable over Bodin et al. in view of Murata et al. and further in view of Romain et al. and DeVaul et al. and further in view of Nishimura et al. (JP2014220676A) and Cunningham et al. (US7609972B2).
Claim 4
Murata teaches receiving an optical beam from an opposite optical space communication device and focusing the received beam via a lens system, evidencing a reception device that receives a beam from the mobile “[0007] The optical space communication device provides a lens system receiving an optical beam transmitted from the opposite optical space communication device and focusing the optical beam on the focusing screen, a beam splitter dividing an optical beam passed through the lens system into two optical beams, an area sensor provided on the focusing screen on which one optical beam which is divided by the beam splitter focuses and a photo detector on the focusing screen is arranged on it two-dimensionally, a laser array provided on the focusing screen on which the other optical beam divided by the beam splitter focuses and a laser on the laser array is arranged two-dimensionally, and a control mechanism which detects which one of the photo detectors on the area sensor detects the optical beam, and selects the laser on the laser array which provides an arrangement coordinate corresponding to the arrangement coordinate of the photo detector that detected the optical beam, as the laser which transmits an optical beam toward the opposite optical space communication device” [Murata, ¶ [0007]].
Murata does not expressly teach an optical space communication system including a mobile station and fixed station. However, within analogous art, Nishimura further expressly teaches an optical space communication system including a mobile station and fixed station, wherein the mobile station control unit determines the orientation direction of the first optical antenna on the basis of stored position information on the fixed station and a detected position of the mobile station “………. To increase a possibility that an optical antenna of a mobile station and an optical antenna of a fixed station that perform optical space communication are in a state of nearly facing each other even if the travel route of the mobile station is undefined. SOLUTION: An optical space communication system of the invention is made up of a fixed station and a mobile station. The mobile station includes: a first communication unit comprising a first optical antenna capable of adjusting its orientation direction; a second communication unit for transmitting information to the fixed station; and a mobile station control unit that determines the orientation direction of the first optical antenna on the basis of stored position information on the fixed station and a detected position of the mobile station, adjusts the orientation direction of the first optical antenna to the determined direction, and makes the second communication unit transmit mobile station information indicating the position of the mobile station to the fixed station. 
The fixed station includes: a third communication unit comprising a second optical antenna capable of adjusting its orientation direction; a fourth communication unit for receiving the mobile station information; and a fixed station control unit that determines the orientation direction of the second optical antenna on the basis of the mobile station information, and adjusts the orientation direction of the second optical antenna to the determined direction……….” [Nishimura, Abstract].
Nishimura does not expressly teach exchanging position data over an RF link for initial pointing. However, within analogous art, Cunningham expressly teaches exchanging GPS position data via an RF link for initial pointing of the beacon, evidencing reception-side receipt of movement/position information for pointing control “A technique for acquiring and tracking terminals in a free space laser communication system involves exchanging beacon laser beams between the terminals to acquire and then
track the terminals such that data laser beams exchanged by the terminals for communication are steered based on feedback from detection of the beacon laser beams. The beacon laser beams used for acquisition have a greater beam divergence than those used for tracking. Gimbals provide coarse steering of the data laser beams and steering mirrors provide fine steering. GPS position data exchanged via an RF link can be used for initial pointing of the beacon laser beams for acquisition. The beacon laser beams can be chopped such that all terminals can use the same beacon wavelength and are distinguished by using different chopping frequencies. By detecting a chopped signal, the position sensor detector can be AC coupled to reduce sensitivity to solar radiation and glint” [Cunningham, Abstract].
At the system level, coordinating both ends of an FSO link is a predictable design choice because acquisition and tracking depend on relative geometry. Nishimura teaches receiver-side orientation control based on position information and detected mobile position, and Cunningham teaches exchanging position information via an RF link to support initial pointing. These teachings motivate providing movement/trajectory information to the reception device so it can pre-orient its receiver.
Having the reception device receive the movement instruction (or equivalent trajectory data) and change its light-receiver orientation in anticipation of motion improves acquisition probability and reduces tracking lag at motion onset. This uses known techniques according to their established functions and yields predictable improvements in link continuity, consistent with KSR and Graham.
Accordingly, it would have been obvious to a POSITA to have the reception device receive movement instruction/trajectory information (or position information corresponding thereto) and change an orientation direction of the light receiver according to that information because doing so predictably improves acquisition probability and alignment stability of a high-directivity optical link (Nishimura; Cunningham) under KSR.
Claim 5
With respect to claim 5, all limitations of claim 4 are taught by Bodin, Murata, Romain, DeVaul, Nishimura and Cunningham, except that claim 5 additionally requires that the movement controller moves the mobile toward the designated position after a predetermined delay time has elapsed since the change control is started. Nishimura and Cunningham do not expressly teach this delay timing. However, within analogous art, Romain expressly teaches time intervals and an appropriate wait interval in the pointing/re-acquisition process “………. At Step 110, the value for t' is referenced to the cubic spline interpolation of the waypoint data such as that shown in FIGS. 5 and 6, to obtain a corresponding latitude φ(t') and longitude λ(t'). At Step 112, the optical terminal 22 is pointed to “stare” at the azimuth and elevation corresponding to latitude φ(t') and longitude λ(t'). Alternatively, the optical terminal 22 is pointed to stare at the next waypoint 28 after φs(t') and λs(t'). If the moving station 12 is not re-acquired by time t'+ε, where ε is an appropriate wait interval, as determined at Step 114, the process is repeated as far back as Step 108. If the re-acquisition process fails after N repetitions, as determined at Step 116, where N is user-selected, the optical terminal is pointed to a selected “home” waypoint of the flight pattern 26 for waiting until reacquisition occurs, as at Step 118……….” [Romain, Col.5].
And “………To do so, a time interval t' is calculated, which corresponds to the time period between the current time and the point in time at which the moving station 12 last passed through a particular waypoint, e.g., the node 30, plus a factor Δ for repositioning the optical terminal. This is related to the interpolated flight path data in that the time-0 point on the interpolated data corresponds to the time point of the moving station 12 passing through the node 30. The value of t' is calculated based in part on GPS-sourced position data received from the moving station 12 prior to the tracking lock being lost. Once calculated, t' is cross-referenced to the interpolated flight path data, which provides an estimate of the moving station’s future location (longitude and latitude) at the end of time period t'. The ground station optical terminal 22 is then re-pointed to stare at a point in space corresponding to the altitude (a known, pre-designated value) and estimated longitude and latitude……...” [Romain, Col.6-7].
The delay-time limitation in the system context is motivated by the same settling and coordination considerations as claim 2, and it is even more beneficial when both ends may reposition.
A coordinated delay reduces the chance that both terminals are simultaneously in transient pointing states during the start of motion.
Romain’s time intervals and wait behavior evidence that timing is a known design parameter in predictive pointing and re-acquisition. A POSITA would therefore have been motivated to incorporate a predetermined delay after initiating change control so that pointing mechanisms settle before movement begins, yielding a predictable improvement in acquisition and link stability.
Claim 6
With respect to claim 6, all limitations of claim 4 are taught by Bodin, Murata, Romain, DeVaul, Nishimura and Cunningham, except that claim 6 additionally requires acquiring the mobile's position multiple times, measuring speed based on the multiple positions, and changing the orientation direction based on the speed.
Murata and Cunningham do not expressly teach measuring the mobile’s speed specifically by comparing multiple acquired positions (i.e., multiple position samples) and then using that speed as an explicit input to change the receiver orientation. However, within analogous navigation and telemetry art, Nishimura expressly teaches detecting movement state of the mobile station including moving speed and moving direction, which may be used for control of optical antenna orientation, “……. A mobile station 100 shown in FIG. 1 includes a movement state detection unit 110, a fixed station information storage unit 120, a mobile station antenna unit 130, a mobile station optical transmission unit 140, a mobile station optical reception unit 150, an information transmission unit 160, and a mobile station controller 170. The movement state detection unit 110 is an example of a detection unit. The mobile station antenna unit 130 is an example of a first communication unit. The information transmission unit 160 is an example of a second communication unit. The moving state detection unit 110 includes a GPS module that receives a GPS (Global Positioning System) signal, an altitude sensor, an attitude sensor, an acceleration sensor, and the like. The movement state of the mobile station 100, such as latitude, longitude, altitude, moving speed, moving direction, attitude angle, and angular acceleration, is detected. The moving state detection unit 110 detects at least the position (latitude, longitude, altitude) of the mobile station 100. The movement state detection unit 110 outputs the detection result to the mobile station control unit 170. The fixed station information storage unit 120 stores position information (latitude, longitude, altitude) of at least one fixed station 200. 
In addition, the fixed station information storage unit 120 includes an individual identification name that can identify the fixed station 200, an individual identification number that can identify the fixed station 200, an IP (Internet Protocol) address of the fixed station 200, an affiliated company of the fixed station 200, Information such as the country to which the fixed station 200 belongs may be stored. Such information can be used for determining whether or not communication with the fixed station 200 is possible. The fixed station information storage unit 120 includes a storage device such as an HDD (Hard Disk Device) or an SSD (Solid State Drive). The mobile station antenna unit 130 includes an optical antenna 131 and a drive mechanism 132. The optical antenna 131 is a directional optical antenna that transmits and receives a light beam to and from the fixed station 200. The optical antenna 131 is an example of a first optical antenna……” [Nishimura, p.3].
Nishimura does not expressly teach periodically repeating receipt of the UAV’s current position from a GPS receiver. However, within analogous art, Bodin expressly teaches periodically repeating receipt of the UAV’s current position from a GPS receiver (multiple position acquisitions), from which speed is readily determined, thereby supporting measuring speed from multiple positions “0134. The method of FIG. 6 includes periodically repeating (610) the steps of receiving (602) in the remote-control device from the GPS receiver a current position of the UAV, and calculating (604) a new heading from the current position to the waypoint. The method of FIG. 6 also includes identifying (606) flight control instructions for flying the UAV on the new heading, and transmitting (608), from the remote-control device to the UAV, the flight control instructions for flying the UAV on the new heading. In this method, if Lon1, Lat1 is taken as the current position, and Lon2, Lat2 is taken as the waypoint position, then the new heading may be calculated generally as the inverse tangent of ((Lat2 − Lat1) / (Lon2 − Lon1))” [Bodin, ¶ [0134]].
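The quoted inverse-tangent heading relation can be illustrated with a short sketch (our own illustration, not Bodin’s code; degrees and a flat-earth approximation are assumed, with `atan2` substituted for a plain inverse tangent to handle the zero-denominator case):

```python
import math

def new_heading_deg(lat1, lon1, lat2, lon2):
    """Angle from the current position (lat1, lon1) toward the waypoint
    (lat2, lon2), generalizing the quoted inverse tangent of
    (Lat2 - Lat1) / (Lon2 - Lon1).  Measured in degrees from the
    longitude (east) axis and normalized to [0, 360)."""
    return math.degrees(math.atan2(lat2 - lat1, lon2 - lon1)) % 360.0
```

For example, a waypoint one degree north and one degree east of the current fix yields 45°.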
Speed is a core input to prediction in any tracking system. Nishimura explicitly refers to detecting movement state including moving speed and moving direction, which informs antenna orientation control.
A POSITA would also recognize that speed can be computed from successive position measurements and timestamps as a routine kinematics calculation.
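Such a routine kinematics calculation might look like the following sketch (units, function names, and the great-circle distance formula are our assumptions, not taken from any cited reference):

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, metres (assumed constant)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def speed_from_fixes(fixes):
    """fixes: list of (t_seconds, lat_deg, lon_deg) samples, oldest first.
    Returns ground speed in m/s computed from the last two samples."""
    (t1, la1, lo1), (t2, la2, lo2) = fixes[-2], fixes[-1]
    return haversine_m(la1, lo1, la2, lo2) / (t2 - t1)
```

A longer window or a least-squares fit over more samples would reduce noise, but the two-sample difference quotient captures the principle.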
Using speed to drive receiver orientation is a predictable improvement because it enables feed-forward compensation and better estimation of near-future relative geometry. In narrow-beam optical links, improved prediction directly reduces tracking error and received-power fluctuation, yielding predictable benefits in link stability.
A POSITA would have been motivated to use speed derived from multiple position samples (or directly sensed speed) to control receiver pointing because tracking response and pointing prediction depend on velocity, and incorporating speed yields predictable improvements in maintaining alignment for optical space communication.
Claim 7 is rejected under 35 U.S.C. §103 as being unpatentable over Bodin et al. in view of Murata et al., Romain et al., DeVaul et al., Nishimura et al. and Cunningham et al. and further in view of Wirth et al. (US7343099B2) and further in view of Coffey et al. (US6097481), Crawford et al. (US20130070239A1) and Segerstrom et al. (US20140049643A1) and further in view of Seok (KR20040055430A) and Reyes et al. (WO2017087320A1).
Claim 7
With respect to claim 7, all limitations of claim 4 are taught by Bodin, Murata, Romain, DeVaul, Nishimura and Cunningham, except wherein claim 7 additionally requires: (i) acquiring a light reception position of the beam, (ii) detecting a start of movement from the light reception position, (iii) acquiring a delay time from receipt of the movement instruction to detection of the start of movement, and (iv) notifying the mobile of the delay time.
The primary combination (Bodin/Murata/Romain/DeVaul/Nishimura/Cunningham) does not expressly teach determining motion onset from quadrant/beam-position detector output and then measuring and reporting an instruction-to-motion delay therefrom.
However, within analogous art, Cunningham teaches exchanging GPS position data via an RF link for initial pointing, evidencing a control/telemetry channel usable to notify the mobile “A technique for acquiring and tracking terminals in a free space laser communication system involves exchanging beacon laser beams between the terminals to acquire and then
track the terminals such that data laser beams exchanged by the terminals for communication are steered based on feedback from detection of the beacon laser beams. The beacon laser beams used for acquisition have a greater beam divergence than those used for tracking. Gimbals provide coarse steering of the data laser beams and steering mirrors provide fine steering. GPS position data exchanged via an RF link can be used for initial pointing of the beacon laser beams for acquisition. The beacon laser beams can be chopped such that all terminals can use the same beacon wavelength and are distinguished by using different chopping frequencies. By detecting a chopped signal, the position sensor detector can be AC coupled to reduce sensitivity to solar radiation and glint” [Cunningham, Abstract].
Within analogous art, Wirth further teaches deriving target position error (Ex/Ey) from quad-cell detector signals (S1–S4), evidencing acquisition of beam reception position/error from detector output “……Initialize System; Set offsets Kx, Ky; Measure S1–S4; Calculate Target Position Error: Ex = [(S1−S2)+(S4−S3)]/(S1+S2+S3+S4) + Kx, Ey = [(S1−S4)+(S2−S3)]/(S1+S2+S3+S4) + Ky; Update Steering Mirror Drives: Dx = kx·Ex, Dy = ky·Ey; Terminate Track; Finished; Proportional Control; Transmitter……” [Wirth, Fig.4A].
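For illustration, the quad-cell error form Wirth’s Fig. 4A describes — quadrant sums and differences normalized by the total signal, plus boresight offsets, feeding a proportional mirror drive — can be sketched as follows (variable and function names are ours, not Wirth’s):

```python
def quad_cell_error(s1, s2, s3, s4, kx=0.0, ky=0.0):
    """Normalized beam-position error from four quadrant-detector
    signals, in the quoted form:
      Ex = [(S1-S2)+(S4-S3)] / (S1+S2+S3+S4) + Kx
      Ey = [(S1-S4)+(S2-S3)] / (S1+S2+S3+S4) + Ky
    kx/ky are boresight offsets."""
    total = s1 + s2 + s3 + s4
    ex = ((s1 - s2) + (s4 - s3)) / total + kx
    ey = ((s1 - s4) + (s2 - s3)) / total + ky
    return ex, ey

def mirror_drive(ex, ey, k=1.0):
    """Proportional steering-mirror drive, D = k * E."""
    return k * ex, k * ey
```

A centered beam (equal quadrant signals) yields zero error, and the normalization makes the error insensitive to overall received power.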
Further, within analogous laser spot tracker art, Coffey teaches that a composite tracking error signal is applied to a gimbal drive mechanism and also teaches detecting consecutive zero crossings/sign changes, evidencing using detector-output behavior to detect tracking events/motion onset “A “bang-bang”, i.e. digital, tracking system for a remote laser-designated target having position, rate and acceleration errors. The system is responsive to the sign of the error signal (up/down, right/left) in relation to boresight, as opposed to the amplitude of the received signal output from an optical quadrature detector, to determine the occurrence of two consecutive zero crossings of boresight following a system gain change where there is a change in sign for consecutive signal detections, and as a result thereof reduces the size or amplitude of the digital control step which is utilized to determine the limit cycle of the oscillation of the tracker’s optics. Additionally, an estimate of the velocity error is determined from the number of signal detections between successive zero crossings, i.e. from up to down, or vice versa, and left to right, or vice versa, and summed with the control step to provide a composite tracking error signal which is applied to a gimbal drive mechanism controlling the tracker optics” [Coffey, Abstract].
Additionally, Crawford teaches a quadrant detector providing four signals and steering to a null position, and teaches a bang-bang method comparing opposite quadrants to generate corrections, evidencing motion/offset detection and correction from quadrant-detector output “[0108] The invention is generally directed to a Laser Spot Tracking System to Simultaneously Process Multiple Targets with Position and Code Data. Laser spot trackers have been used for many years to steer a weapon system onto target. Typically, a pulsed narrow beam laser illuminates the target; the laser light is scattered from the target. The tracker or seeker lens collects some of the scattered light and condenses it into a spot. The tracker is steered until the spot is divided equally into four equal signals, normally using a quad detector, the null position. In this position the tracking head boresight is pointed at the target. [0109] There are two ways used to process the signals. The “bang-bang” method compares opposite quadrants or directions and the spot is dithered around the null position as the comparators make a series of corrections. A typical “bang-bang” system is described…….” [Crawford, ¶ [0108], ¶ [0109]].
Crawford does not expressly teach low-latency tracking and motion detection using gyroscopes. However, within analogous art, Segerstrom teaches low-latency tracking and motion detection using gyroscopes/accelerometers in a gimbal subassembly and nulling such motion via a fast-steering mirror (FSM) “[0030] The purpose of the BSM is to attenuate any residual disturbances imparted upon the gimbal system. In addition to maintaining a highly stable laser beam, the laser is required to maintain tight pointing accuracy to properly mark (i.e., designate) targets. In one embodiment, the BSM comprises a low-latency target tracker apparatus, a beam steering device, and a gyro-feed forward control module (e.g., gimbal single axis control loop). Low-latency target tracker apparatuses of various implementations are well-known. In the context of embodiments of the present invention, a skilled person will appreciate that it is preferred to implement a target tracker apparatus having a maximum latency of about one frame per one-thirtieth of a second (i.e., a low-latency target tracker apparatus) and that the target tracker apparatus is capable of tracking targets with imaging provided by a EO camera and/or SWIR camera and/or LWIR camera. An example of the gimbal single axis control loop (i.e., beam stabilization circuitry) is shown in FIG. 5. A Fast-Steering Mirror (FSM) is an
example of the beam steering device. FIG. 6 shows an embodiment of a gimbal payload having a compact FSM implementation. As disclosed above, a gimbal system comprises a gimbal (e.g., gimbal base, gimbal, azimuth structure, elevation structure) with associated components attached thereto (e.g., payload assets). [0031] The control system for the FSM can determine the desired mirror position using a number of sensing paths. Gyroscopes and accelerometers can be used to detect motion of the elevation subassembly. The FSM can null out that motion thereby fixing the laser beam spot location. A video imaging device (e.g., of the target tracker apparatus) can be used to monitor target movement with respect to the line of sight. The FSM can adjust the beam output to correspond to that motion thereby keeping the laser beam spot on the target. If the elevation axis angle is observed, for example with an encoder, and the pointing error due to elevation axis position has been mapped and recorded, the FSM can null that error. If an optical sensor (e.g., a position-sensing photodiode) is stimulated by a beam (e.g., a high-power laser or a smaller reference beam) that originated in the azimuth subassembly and has been relayed by the FSM, errors in the majority of the optical system can be captured and driven to null. Furthermore, the FSM can include mirror position detectors that can provide feedback to the control system to ensure the mirror is correctly positioned” [Segerstrom, ¶ [0030], ¶ [0031]].
Segerstrom does not expressly teach explicit time-delay modeling/compensation. However, within analogous art, Seok teaches explicit time-delay modeling/compensation in an electro-optical tracker, including that total time delay is calculated “……Variable time delay T in equation (3) indicates a total time delay amount, which is calculated simply as the sum of the image sensor sampling delay time (Ts) and the video signal time delay (Td) of the processor 220. In addition, since the delay time τ of the electro-optical tracker is a constant property of the system, the estimated delay time T of the plant in equation (4) uses the value of τ as it is. In addition, since the actual gain K of the plant is known, the gain estimate Ks of the plant can be set to the same value, as in Equation 5………” [Seok].
Further, within analogous actuation-delay monitoring art, Reyes teaches identifying actuation delay from initiation of actuation until components move, and reporting/telemetry transmission of the measured condition “[0028] Using the acoustic signals generated by the valve 202 while opening or closing, the valve monitor 206 can identify static friction (stiction) in valve 202. Generally, stiction is friction between stationary components of the valve 202 that inhibits relative motion between the components when the valve 202 is actuated. For example, stiction between sealing surfaces may inhibit opening or closing of the valve 202. The valve monitors 206 may identify stiction in the valve 202 as delay from a point in time that valve actuation is initiated until the internal components of the valve 202 move. The valve monitors 206 can measure such actuation delay by monitoring operation of a solenoid valve of the pod 100, or other control signal (via a valve control system) indicating that the valve 202 is being actuated, and monitoring the acoustic signature of the valve 202. If the time delay between initiation of valve actuation and initial movement of the valve (as identified via the acoustic signals generated by movement within the valve 202) changes over time (most likely to increase) the change in time delay may be indicative of stiction in the valve 202. The valve monitors 206 may report detected stiction to an authority responsible for operation of the valve 202 to allow scheduling of maintenance………. [0036] The preprocessed signal generated by the signal preprocessing circuitry 602 is provided to the telemetry transceiver 604. The telemetry transceiver 604 transmits the preprocessed signal to the telemetry transceiver 606. 
The telemetry transceiver 606 receives the signal transmitted by the telemetry transceiver 604 and provides the received signal to the valve monitor 608 for further processing and use in characterization of the valve 202 as described herein” [Reyes, ¶ [0028], ¶ [0036]].
Claim 7’s delay-measurement-and-notification concept addresses a recognized practical issue: command-to-motion latency in tracking/servo systems.
In optical pointing, motion onset manifests as a change in the beam position/error signal on a position detector, and knowing the delay between instruction receipt and actual motion onset enables better scheduling of pre-pointing and reduces loss of lock.
A POSITA would be motivated to measure this delay because it improves feed-forward pointing and reduces reacquisition time in high-directivity optical links.
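A minimal sketch of such a delay measurement from beam-position samples (hypothetical; the function, parameter names, and threshold logic are our illustration, not taken from any cited reference):

```python
def motion_onset_delay(samples, t_command, threshold):
    """samples: list of (t, x, y) beam-position readings, ordered by time.
    Returns the delay between t_command and the first post-command sample
    whose position has moved more than `threshold` from the position at
    the command instant, or None if no motion is detected.  The returned
    delay is what would be notified back to the mobile."""
    baseline = None
    for t, x, y in samples:
        if t <= t_command:
            baseline = (x, y)  # last reading at or before the command
        elif baseline is not None:
            dx, dy = x - baseline[0], y - baseline[1]
            if (dx * dx + dy * dy) ** 0.5 > threshold:
                return t - t_command  # instruction-to-motion delay
    return None
```

The threshold would in practice be set above the detector’s noise floor so that jitter is not mistaken for motion onset.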
Wirth, Crawford and Coffey provide concrete detector-output processing that can evidence motion onset; Segerstrom and Seok confirm that latency and delay compensation are central to tracker performance; Reyes teaches measuring actuation delay from command to first movement and reporting it via telemetry; and Cunningham provides a suitable RF channel for exchanging such timing information.
The combination is technically coherent and yields predictable improvements with a reasonable expectation of success under KSR.
Accordingly, it would have been obvious to a POSITA to (i) acquire beam reception position/error from quad-cell/quadrant detector output (Wirth; Crawford), (ii) detect motion onset from changes in detector output (Coffey; Crawford), (iii) measure and compensate for command-to-motion delay in tracking systems (Segerstrom; Seok; Reyes), and (iv) notify the mobile of the measured delay via the known control/telemetry link (Cunningham; Reyes telemetry), because these are known techniques applied according to their established functions yielding predictable improvements in tracking stability and link continuity (KSR).
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mohammed Abdelraheem, whose telephone number is (571) 272-0656. The examiner can normally be reached Monday–Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Payne, can be reached at (571) 272-3024. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/MOHAMMED ABDELRAHEEM/Examiner, Art Unit 2635
/DAVID C PAYNE/Supervisory Patent Examiner, Art Unit 2635