DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/19/2026 has been entered. Claims 1-18 remain pending in the application.
Response to Arguments
Applicant’s arguments, see Remarks filed 01/19/2026, regarding the rejection of the claims under 35 U.S.C. 103 in view of Connellan (US 2019/0113966 A1), modified by Denenberg et al. (US 2023/0271322 A1), have been fully considered but are not persuasive. Applicant argues that nowhere does Denenberg disclose or suggest that “active sensors of devices in other groups are controlled to operate without employing multiplexing when their fields of view do not overlap, and that the sensors operate at full performance when they do not interfere with each other,” as recited by the amended limitations. Applicant further argues that Denenberg’s crosstalk-mitigation or noninterference scheme multiplexes all sensors or cameras, even those that are non-interfering.
Examiner respectfully disagrees. Denenberg determines interference mitigation on a per-camera level, and expressly states that cameras whose illumination levels are non-interfering for other cameras need not be considered in a mitigation scheme ([0125]). Since the mitigation scheme is what implements the multiplexing ([0118]-[0121]), the implication is that non-interfering cameras are not included in the multiplexing. Therefore, Denenberg teaches the recited limitations.
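For illustration only, the following Python sketch (an examiner's illustration, not drawn from Denenberg's disclosure; all function and camera names are hypothetical) shows the per-camera logic described above: only cameras that interfere with at least one other camera enter the mitigation, and hence multiplexing, scheme, while non-interfering cameras are left out entirely.

```python
# Illustrative sketch only: per-camera mitigation per Denenberg [0125].
# Cameras that interfere with no other camera are excluded from the
# mitigation (multiplexing) scheme. Names are hypothetical.

def cameras_needing_mitigation(cameras, interferes):
    """Return only cameras that interfere with at least one other camera;
    the rest are left out of the multiplexing scheme entirely."""
    return {
        cam for cam in cameras
        if any(interferes(cam, other) for other in cameras if other != cam)
    }

# Example: cameras A and B interfere with each other; C is isolated.
pairs = {("A", "B"), ("B", "A")}
mitigated = cameras_needing_mitigation(
    ["A", "B", "C"], lambda x, y: (x, y) in pairs
)
assert mitigated == {"A", "B"}  # C is never multiplexed
```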
Applicant argues that both Connellan and Denenberg are silent with respect to sensor performance when not employing multiplexing and have no disclosure related to active sensors of devices operating at full performance when not employing multiplexing.
Examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the device operating at less than full performance while multiplexing and at full performance while not multiplexing) are not adequately recited in the rejected claims. The performance level of the device while multiplexing is entirely absent from the claims; therefore, under the broadest reasonable interpretation, the device could be performing at full or less than full capacity while multiplexing. The claims provide no definition of “full performance,” and the claim language implies that any form of not multiplexing constitutes full performance and any form of multiplexing constitutes less than full performance. This assertion is contrary to the commonly accepted meaning of multiplexing in the industry. Multiplexing enhances overall system performance rather than individual device performance, and the mere fact that devices are multiplexed does not mean they are inherently operating at reduced performance. Ideally, dynamic multiplexing requires that devices perform proportionally to the amount of interference they cause within the system, so some devices may perform at full capacity while still being multiplexed. The broadest reasonable interpretation of the claims gives no definition of “full performance” except that it involves not using multiplexing. Therefore, the disclosure of Denenberg, wherein non-overlapping devices are permitted to operate without multiplexing ([0125]), is sufficient to teach the limitations as amended.
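The point about performance under multiplexing can be made with a short numeric sketch (hypothetical values, not taken from either reference): under time-division multiplexing, a sensor's duty cycle depends on how many interfering sensors must share its schedule, and sensors assigned the same timeslot because they do not interfere (Denenberg [0121]) retain the full frame.

```python
# Illustrative arithmetic only (hypothetical values).
frame_time = 1.0  # normalized sensing frame

# Three mutually interfering sensors must split the frame:
interfering_duty = frame_time / 3   # ~0.33 each: reduced individual rate

# Two non-interfering sensors may be assigned the *same* timeslot and
# thus each retain the full frame despite being part of the scheme:
shared_slot_duty = frame_time / 1   # 1.0 each: full performance

print(interfering_duty, shared_slot_duty)
```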
Applicant argues that Denenberg only discloses assigning the same timeslots to cameras far enough from each other such that they will not interfere, and Denenberg has no disclosure related to grouping and controlling devices dynamically as they move.
Examiner respectfully disagrees. Denenberg explicitly discloses controlling the cameras in a zone-level system that assigns an illumination wavelength and/or modulation frequency and/or time slices to cameras “dynamically during operation so that each individual camera does not generate illumination that can be sensed by other cameras in the zone” ([0113]). In this teaching, Denenberg expressly discloses the recited and argued limitations.
For these reasons, the rejections over Connellan in view of Denenberg are maintained below.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Connellan (US 2019/0113966 A1) in view of Denenberg et al. (US 2023/0271322 A1).
As per claim 1, Connellan teaches: A computer-implemented method comprising: tracking positions and orientations of a plurality of devices within a real-world environment, each device comprising at least one active sensor; (Connellan, [0007]; “Some embodiments of the invention include an input device configured for interfacing with a virtual reality (VR)/augmented reality (AR) environment, the input device including one or more processors; an internal measurement unit (IMU) controlled by the one or more processors and configured to measure an acceleration and velocity (e.g., linear or angular velocity) of the input device; and a plurality of sensors controlled by the one or more processors and disposed on the input device, each of the plurality of sensors configured to detect emissions received from a plurality of remote emitters (e.g., transducers), and each of the plurality of sensors associated by the one or more processors with a different particular location on the input device.”)
classifying the plurality of devices into a plurality of groups, based on the positions and the orientations of the plurality of devices within the real-world environment; (Connellan, [0007]; "plurality of sensors controlled by the one or more processors and disposed on the input device, each of the plurality of sensors configured to detect emissions received from a plurality of remote emitters (e.g., transducers), and each of the plurality of sensors associated by the one or more processors with a different particular location on the input device. In some implementations, the one or more processors can be configured to: determine a time-of-flight (TOF) of the detected emissions, determine a first estimate of a position and orientation of the input device based on the TOF of a subset of the detected emissions and the particular locations of each of the plurality of sensors on the input device that are detecting the detected emissions")
and controlling the active sensors of the devices in the given group to operate by employing multiplexing (Connellan, [0184]; "ultrasonic transducers can be fired simultaneously or in a time-cascaded implementation, typically referred to as time multiplexing. In some embodiments, calculating a position and/or orientation of peripheral device (e.g., 6 DOF controller) may take into account a time of emission for each ultrasound wave when using a time multiplexed-based firing. In some cases, different firing patterns may be employed for better results (e.g., minimized acoustic collisions). For instance, a first order of fired transducers (e.g., ABC) may be fired in succession in a certain operating environment (e.g., a particular room with certain acoustic characteristics), followed by a second order of fired transducers (e.g., BCA) when the operating environment changes and causes increased inducement of error (e.g., increased echoes, collisions, etc.) in received position measurement data.")
Connellan does not expressly teach: wherein devices of a given group have active sensors that are determined to interfere with each other when their fields of view overlap, and selectively controlling the active sensors of the devices in the given group so that the active sensors with overlapping fields of view are controlled to operate by employing multiplexing, while active sensors of devices in other groups are controlled to operate without employing multiplexing when their fields of view do not overlap, wherein the classifying of the plurality of devices into the plurality of groups and the selectively controlling of the active sensors are performed dynamically based on the positions and orientations of the plurality of devices as they move within the real world environment.
However, Denenberg in the same field of endeavor teaches: wherein devices of a given group have active sensors that are determined to interfere with each other when their fields of view overlap (Denenberg, [0025], “crosstalk mitigation among sensors or cameras by computationally defining a noninterference scheme that respects the independent monitoring and operation of each workcell. The scheme may involve communication between adjacent cells to adjudicate non-interfering sensor operation or system-wide mapping of interference risks and mitigation thereof. Mitigation strategies can involve time-division and/or frequency-division multiplexing or other forms of frequency modification such as spread spectrum or chirping.”), and
selectively controlling the active sensors of the devices in the given group so that the active sensors with overlapping fields of view are controlled to operate by employing multiplexing, (Denenberg [0118-0119], “The local masters of all the grouped workcells coordinate to mitigate interference among the zone groups and, therefore, at the zone level as well. In centralized implementations, instead of having controllers responsible for adjacent workspace areas cooperate with each other individually, a central supervisory control system 615 oversees multiple sets of zone-specific cameras or even the entire workspace and receives data from all cameras under its supervision. Similar to interference mitigation for a single zone, interference mitigation between zones can be achieved using time-division or frequency-division multiplexing (or a combination of both), where a control system assigns illumination wavelengths, modulation frequencies or time slices to zones and cameras during the startup or configuration phase. Frequency-division multiplexing can be enabled by changing either or both of the illumination wavelength or the modulation frequency. Other approaches involve static or dynamic interference maps that can be determined experimentally and used as inputs to the cameras to adjust for background interference”, [0121], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference. The number of independent timeslots needed in a given configuration will depend on camera geometry and locations. Graph coloring algorithms, for example, can be used to determine the minimum number of timeslots among all workcells in a facility”),
while active sensors of other groups are controlled to operate without employing multiplexing when their fields of view do not overlap and operate at full performance when not employing multiplexing (Denenberg, [0122], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference”; [0125], “For any given camera, it is only interfering cameras or camera combinations that must be mitigated; cameras whose illumination levels are non-interfering for other cameras need not be considered in a mitigation scheme” (examiner’s note: not considering them in the mitigation scheme means not multiplexing them with the other cameras))
wherein the classifying of the plurality of devices into the plurality of groups and the selectively controlling of the active sensors (Denenberg [0122], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference. The number of independent timeslots needed in a given configuration will depend on camera geometry and locations. Graph coloring algorithms, for example, can be used to determine the minimum number of timeslots among all workcells in a facility”) are performed dynamically based on the positions and orientations of the plurality of devices as they move within the real world environment (Denenberg [0113], “all the cameras may be controlled by a zone-level control system, which triggers data capture sequentially so as to avoid interference or crosstalk among cameras. This can be achieved using, for example, time-division or frequency-division multiplexing (or both), where the zone-level control system assigns an illumination wavelength and/or modulation frequency and/or time slices to cameras during the startup or configuration phase or dynamically during operation so that each individual camera does not generate illumination that can be sensed by other cameras in the zone”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to include, wherein devices of a given group have active sensors that are likely to interfere with each other, and selectively controlling the active sensors of the devices in the given group so that the active sensors likely to interfere with each other are controlled to operate by employing multiplexing, while active sensors of other groups are controlled to operate without employing multiplexing, in the method of Connellan as taught by Denenberg, in order to better mitigate cross-talk between sensors based on other devices in their proximity. (see Denenberg [0024])
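As context for the graph-coloring language in Denenberg ([0121]-[0122]), the following sketch shows one conventional greedy coloring over an interference graph, with sensors as vertices, field-of-view overlaps as edges, and timeslots as colors. This is an examiner's illustration under assumed names; greedy coloring is a simple strategy and does not guarantee the minimum number of timeslots.

```python
# Illustrative greedy graph coloring: colors are timeslots, and an edge
# joins sensors whose fields of view overlap. Names are hypothetical.

def assign_timeslots(sensors, overlaps):
    slots = {}
    for s in sensors:
        taken = {slots[n] for n in sensors if n in slots and overlaps(s, n)}
        slot = 0
        while slot in taken:
            slot += 1  # pick the lowest timeslot not used by a neighbor
        slots[s] = slot
    return slots

edges = {frozenset(p) for p in [("A", "B"), ("B", "C")]}
print(assign_timeslots(["A", "B", "C", "D"],
                       lambda x, y: frozenset((x, y)) in edges))
# -> {'A': 0, 'B': 1, 'C': 0, 'D': 0}: A and C share a slot, and D,
#    which interferes with nothing, effectively runs unmultiplexed.
```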
As per claim 2, Connellan in view of Denenberg teaches: The computer-implemented method of claim 1.
Connellan further teaches: comprising: obtaining a three-dimensional environment model of the real-world environment in which the plurality of devices are present; (Connellan, [0037]; "As used herein, the term "rendered images" may include images that may be generated by a computer and displayed to a user as part of a virtual reality environment. The images may be displayed in two or three dimensions. Displays disclosed herein can present images of a real-world environment by, for example, enabling the user to directly view the real-world environment and/or present one or more images of a real-world environment (that can be captured by a camera, for example")
determining, from the three-dimensional environment model, positions of optical barriers present in the real-world environment; (Connellan, [0044]; "As used herein, the term "sensor system" may refer to a system operable to provide position information concerning input devices, peripherals, and other objects in a physical world that may include a body part or other object. The term "tracking system" may refer to detecting movement of such objects. The body part may include an arm, leg, torso, or subset thereof including a hand or digit (finger or thumb). The body part may include the head of a user. The sensor system may provide position information from which a direction of gaze and/or field of view of a user can be determined. The object may include a peripheral device interacting with the system. The sensor system may provide a real-time stream of position information. In an embodiment, an image stream can be provided, which may represent an avatar of a user. The sensor system and/or tracking system may include one or more of a: camera system; a magnetic field-based system; capacitive sensors; radar; acoustic; other suitable sensor configuration, optical, radio, magnetic, and inertial technologies, such as lighthouses, ultrasonic, IR/LEDs, SLAM tracking, light detection and ranging (LIDAR) tracking, ultra-wideband tracking, and other suitable technologies as understood to one skilled in the art. The sensor system may be arranged on one or more of: a peripheral device, which may include a user interface device, the HMD; a computer (e.g., a P.C., system controller or like device); other device in communication with the system.")
and identifying a plurality of segments of the real-world environment that are optically separated from each other, based on the positions of the optical barriers, wherein devices in a given group are present in a corresponding segment of the real-world environment. (Connellan, [0207]; "In certain embodiments, a plurality of sensors (e.g., at least three) may be disposed on the peripheral device (e.g., ultrasound detectors 1232) and the location of each of the plurality of sensors with respect to the location on the peripheral device is known by the system (e.g., peripheral device and/or HMD.") (Connellan, [0036]; "As used herein, the term "real-world environment" or "real-world" may refer to the physical world. Hence, term "real-world arrangement" with respect to an object (e.g., a body part or user interface device) may refer to an arrangement of the object in the real-world and may be relative to a reference point. The term "arrangement" with respect to an object may refer to a position (location and orientation). Position can be defined in terms of a global or local coordinate system.")
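A minimal sketch of the segment-based grouping recited in claim 2, assuming a simplified one-dimensional stand-in for volumes bounded by optical barriers in the three-dimensional environment model (all names and geometry are hypothetical):

```python
# Illustrative sketch: group devices by which optically separated
# segment of the environment contains their position.

def classify_by_segment(device_positions, segments):
    """segments: {name: (xmin, xmax)} -- a 1-D stand-in for volumes
    bounded by optical barriers in the 3-D environment model."""
    groups = {name: [] for name in segments}
    for device, x in device_positions.items():
        for name, (lo, hi) in segments.items():
            if lo <= x < hi:
                groups[name].append(device)
                break
    return groups

# A wall (optical barrier) at x = 5 splits the room into two segments:
print(classify_by_segment({"hmd1": 2.0, "hmd2": 3.5, "hmd3": 7.1},
                          {"seg_A": (0, 5), "seg_B": (5, 10)}))
# -> {'seg_A': ['hmd1', 'hmd2'], 'seg_B': ['hmd3']}
```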
As per claim 3, Connellan in view of Denenberg teaches: The computer-implemented method of claim 2.
Connellan further teaches: detecting, based on said tracking, when a given device has moved from a first segment to a second segment; (Connellan, [0090]; “In such situations, tracking of the position and orientation of controller 508 relative to HMD 8 can switch to controller 508 and its sensors 514, 516. Control can switch back when the occlusion is removed. Alternately, in case of occlusions, the system does not switch from sensors located in the controllers to sensors located in the HMD. Instead, the system will switch from a set of sensors located in the controller (e.g., ultrasound) to another set of sensors (e.g., IMU) also located in the controller.”)
and re-classifying the given device by shifting the given device from a first group to a second group, wherein the first group and the second group correspond to the first segment and the second segment, respectively. (Connellan, [0096]; “In certain embodiments, ultrasound tracking may be lost due to occlusion, noise or otherwise. When the occlusion or other interference is removed, the tracked controller, totem or object may need to be reacquired. Using the switching of sensors, as described above, another sensor can provide current position information for the purpose of reacquisition. Alternately, the last motion vector can be used with the elapsed time since the last tracked position to estimate a new target position for reacquisition. Alternately, fusion can used to provide a smooth transition from one sensor set to another.”)
As per claim 4, Connellan in view of Denenberg teaches: The computer-implemented method of claim 1.
Connellan further teaches: detecting, based on the positions and the orientations of the plurality of devices, when fields of view of active sensors of at least two devices in a given group do not overlap; (Connellan, [0174]; “In further embodiments, dynamically switching between transmitters (e.g., transceivers mounted on an HMD) and/or microphones (e.g., mounted on a controller device) can facilitate the avoidance of echo overlap between received signals. For example, if there is significant echo overlap received on a controller device when the HMD/controller relationship is configured in a particular way, a different transducer positioned at a different location on the HMD may be used to change the distance or orientation between the source (e.g., adjacent transducer) and the destination (e.g., controller device) to help avoid the echo overlap condition. Echo overlap typically occurs when a microphone receives both a line-of-site signal and one or more echoes that are received at a time after the line-of-site signal less than the inverse of the bandwidth of the particular signal, which can significantly degrade the precision of a TOF measurement.”)
and controlling the active sensors of the at least two devices to operate without employing the multiplexing, when the fields of view of the active sensors of the at least two devices do not overlap. (Connellan, [0174]; “In further embodiments, dynamically switching between transmitters (e.g., transceivers mounted on an HMD) and/or microphones (e.g., mounted on a controller device) can facilitate the avoidance of echo overlap between received signals. For example, if there is significant echo overlap received on a controller device when the HMD/controller relationship is configured in a particular way, a different transducer positioned at a different location on the HMD may be used to change the distance or orientation between the source (e.g., adjacent transducer) and the destination (e.g., controller device) to help avoid the echo overlap condition. Echo overlap typically occurs when a microphone receives both a line-of-site signal and one or more echoes that are received at a time after the line-of-site signal less than the inverse of the bandwidth of the particular signal, which can significantly degrade the precision of a TOF measurement.”)
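The field-of-view check recited in claim 4 can be illustrated with a crude geometric sketch (examiner's illustration; the two-dimensional cone model and all names are assumptions, and containment of the other device is only a rough proxy for true frustum intersection):

```python
# Illustrative 2-D sketch: treat each sensor's field of view as a cone
# and approximate overlap by whether either cone contains the other device.
import math

def sees(pos_a, dir_a, half_fov_deg, pos_b):
    """True if device B lies inside device A's viewing cone.
    dir_a is assumed to be a unit vector."""
    to_b = (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    dist = math.hypot(*to_b)
    if dist == 0:
        return True
    cos_angle = (dir_a[0] * to_b[0] + dir_a[1] * to_b[1]) / dist
    return cos_angle >= math.cos(math.radians(half_fov_deg))

def fovs_overlap(a, b, half_fov_deg=45):
    (pos_a, dir_a), (pos_b, dir_b) = a, b
    return sees(pos_a, dir_a, half_fov_deg, pos_b) or \
           sees(pos_b, dir_b, half_fov_deg, pos_a)

# Two devices facing away from each other: no overlap, so no multiplexing.
print(fovs_overlap(((0, 0), (-1, 0)), ((4, 0), (1, 0))))  # False
```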
As per claim 5, Connellan in view of Denenberg teaches: The computer-implemented method of claim 4.
Connellan further teaches: detecting, based on the positions and the orientations of the plurality of devices, when the fields of view of the active sensors of the at least two devices in the given group do not overlap, but an angle between orientations of the active sensors of the at least two devices in the given group is smaller than a predefined threshold angle; (Connellan, [0159]; “In certain embodiments, both envelope detection and correlation detection may be employed simultaneously. In some cases, each method can be appropriately weighted such that envelope detection may receive more weighting (e.g., relative influence on the TOF calculation) during periods of detected movement greater than a threshold speed (e.g., >10 cm/s) and correlation detection may receive more weighting during detected periods of movement less than the threshold speed. Any suitable weighting algorithm may be employed, as would be understood by one of ordinary skill in the art with the benefit of this disclosure.”) (Connellan, [0008]; “The AR/VR environment can be defined by a Cartesian coordinate system, polar coordinate system, or any suitable coordinate system for tracking in three-dimensional (3D) space. In some cases, the detected emissions can be propagated according to a TDM scheme wherein each consecutively detected emission corresponds to a different transducer of the plurality of emitters, and where each consecutively detected emission detected by one or more of the plurality of sensors is individually used to update the first estimate. In other embodiments, the detected emissions can be propagated according to a CDM scheme where the one or more processors are configured to: determine a first code in the detected emissions having a highest amplitude; process the first code to update the first estimate; subtract the highest amplitude code from the detected emissions; determine a code in the detected emissions having a second highest amplitude; and process the second code to update the first estimate.”)
and controlling the active sensors of the at least two devices to operate by employing the multiplexing, when the fields of view of the active sensors of the at least two devices in the given group do not overlap, but the angle between the orientations of the active sensors of the at least two devices is smaller than the predefined threshold angle. (Connellan, [0184]; “In some systems, ultrasonic transducers can be fired simultaneously or in a time-cascaded implementation, typically referred to as time multiplexing. In some embodiments, calculating a position and/or orientation of peripheral device (e.g., 6 DOF controller) may take into account a time of emission for each ultrasound wave when using a time multiplexed-based firing.”) (Connellan, [0159] “In certain embodiments, both envelope detection and correlation detection may be employed simultaneously. In some cases, each method can be appropriately weighted such that envelope detection may receive more weighting (e.g., relative influence on the TOF calculation) during periods of detected movement greater than a threshold speed (e.g., >10 cm/s) and correlation detection may receive more weighting during detected periods of movement less than the threshold speed. Any suitable weighting algorithm may be employed, as would be understood by one of ordinary skill in the art with the benefit of this disclosure.”)
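Claim 5's refinement, retaining multiplexing when the orientations are within a threshold angle even though the fields of view do not presently overlap, can be sketched as follows (hypothetical threshold and names; the rationale that near-parallel orientations make future interference likely as the devices move is an assumption for illustration):

```python
# Illustrative sketch of the claim 5 angle-threshold condition.
import math

def angle_between_deg(dir_a, dir_b):
    dot = dir_a[0] * dir_b[0] + dir_a[1] * dir_b[1]
    dot /= math.hypot(*dir_a) * math.hypot(*dir_b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def needs_multiplexing(fov_overlap, dir_a, dir_b, threshold_deg=30):
    # Multiplex on overlap, or when the orientations are nearly parallel
    # (assumed heuristic: interference is likely as devices move).
    return fov_overlap or angle_between_deg(dir_a, dir_b) < threshold_deg

print(needs_multiplexing(False, (1, 0), (0.95, 0.1)))  # True: ~6 deg apart
```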
As per claim 7, Connellan in view of Denenberg teaches: The computer-implemented method of claim 1.
Connellan further teaches: monitoring radio packets transmitted by the plurality of devices; (Connellan, [0042]; “As used herein, the term “communication resources” may refer to hardware and/or firmware for electronic information transfer. Wireless communication resources may include hardware to transmit and receive signals by radio, and may include various protocol implementations e.g. 802.11 standards described in the Institute of Electronics Engineers (IEEE) and Bluetooth™ signal line, said modulation may accord to a serial protocol such as, for example, a Universal Serial Bus (USB) protocol, serial peripheral interface (SPI), inter-integrated circuit (I2C), RS-232, RS-485, or other protocol implementations.”)
and for each device, determining at least one other device that is in a proximity of said device, based on monitoring of at least one radio packet transmitted by the at least one other device, wherein the plurality of devices are classified into the plurality of groups based also on a determination of which devices are in a proximity of each other. (Connellan, [0064]; “As used herein, the term “user interface device” may include various devices to interface a user with a computer, examples of which include: pointing devices including those based on motion of a physical device, such as a mouse, trackball, joystick, keyboard, gamepad, steering wheel, paddle, yoke (control column for an aircraft) a directional pad, throttle quadrant, pedals, light gun, or button; pointing devices based on touching or being in proximity to a surface, such as a stylus, touchpad or touch screen; or a 3D motion controller. The user interface device may include one or more input elements. In certain embodiments, the user interface device may include devices intended to be worn by the user. Worn may refer to the user interface device supported by the user by means other than grasping of the hands.”) (Connellan, [0064]; “A transceiver 174 and antenna 175 allows the wireless downloading of programs, communication with other devices and communication with computer 4. Bluetooth, RF, or any other wireless communication technology may be used. A key switch array 176 detects key activations and provides the key data to microprocessor 168, which can then provide the key data as inputs to HMD/glasses 8 or a display 178.”)
As per claim 8, Connellan in view of Denenberg teaches: The computer-implemented method of claim 7.
Connellan further teaches: wherein the radio packets comprise Bluetooth® advertising packets. (Connellan, [0042]; “As used herein, the term “communication resources” may refer to hardware and/or firmware for electronic information transfer. Wireless communication resources may include hardware to transmit and receive signals by radio, and may include various protocol implementations e.g. 802.11 standards described in the Institute of Electronics Engineers (IEEE) and Bluetooth™ signal line, said modulation may accord to a serial protocol such as, for example, a Universal Serial Bus (USB) protocol, serial peripheral interface (SPI), inter-integrated circuit (I2C), RS-232, RS-485, or other protocol implementations.”)
As per claim 9, Connellan in view of Denenberg teaches: The computer-implemented method of claim 7.
Connellan further teaches: wherein said monitoring of the radio packets comprises measuring a signal strength of a radio packet transmitted by a given device. (Connellan, [0064]; “A transceiver 174 and antenna 175 allows the wireless downloading of programs, communication with other devices and communication with computer 4. Bluetooth, RF, or any other wireless communication technology may be used. A key switch array 176 detects key activations and provides the key data to microprocessor 168, which can then provide the key data as inputs to HMD/glasses 8 or a display 178.”)
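Claims 7-9 can be illustrated together with a sketch of proximity grouping from the measured signal strength (RSSI) of monitored Bluetooth advertising packets; the -60 dBm cutoff and all names are illustrative assumptions, not taken from Connellan:

```python
# Illustrative sketch: infer which devices are in proximity of each other
# from the RSSI of monitored advertising packets (assumed threshold).

RSSI_NEAR_DBM = -60  # hypothetical cutoff: stronger than this is "near"

def neighbors_from_rssi(rssi_reports):
    """rssi_reports: {(listener, advertiser): rssi_dbm} from monitored
    advertising packets; returns each device's nearby devices."""
    near = {}
    for (listener, advertiser), rssi in rssi_reports.items():
        if rssi > RSSI_NEAR_DBM:
            near.setdefault(listener, set()).add(advertiser)
    return near

reports = {("hmd1", "hmd2"): -48, ("hmd1", "hmd3"): -81,
           ("hmd2", "hmd1"): -50}
print(neighbors_from_rssi(reports))
# -> {'hmd1': {'hmd2'}, 'hmd2': {'hmd1'}}: hmd3 is too far to be grouped.
```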
As per claim 10, Connellan in view of Denenberg teaches: The computer-implemented method of claim 1.
Connellan further teaches: wherein the multiplexing comprises at least one of: time-division multiplexing, wavelength-division multiplexing, space-division multiplexing. (Connellan, [0008] “In certain embodiments, the detected emissions can be propagated according to one of a time-division-multiplexing (TDM) scheme, frequency-division-multiplexing (FDM) scheme, or a code-division-multiplexing (CDM) scheme.”)
As per claim 11, Connellan teaches: A system comprising at least one server that is communicably coupled to a plurality of devices, each device comprising at least one active sensor, wherein the at least one server is configured to: (Connellan, [0205]; “In a typical tracking system, at least three transponders (e.g., ultrasound transducers, optical emitters, etc.) are used to track a movement of a peripheral device 1520 (e.g., controller device, input device, 6 DOF controller, etc.) in 3D space to provide at least one signal for each dimension (e.g., xyz). In some cases, occlusions may block one or more of the signals from the transponders such that the tracked peripheral device 1520 may not receive the full set of three signals. By way of example, FIG. 15 shows an object shown in a FOV 1530 but obscured by object 1540. In some conventional systems, a subset of the full set of signals (e.g., one or two signals) could not be used to determine a position of the peripheral device (e.g., controller 1200), and would thus be discarded until the next full set of signals arrived. However, aspects of the present invention can utilize the subset of the full set of signals to improve a tracking estimation of a peripheral device.”)
obtain information indicative of positions and orientations of the plurality of devices within a real-world environment; (Connellan, [0011]; “In further embodiments, a method for tracking a peripheral device in a VR/AR environment can include: detecting a location of a base device within a first coordinate system; receiving location data from a peripheral device; determining a location of the peripheral device relative to the base device based on the location data, the determined location determined within a second coordinate system; mapping the location of the peripheral device of the second coordinate system into a location within the first coordinate system; and causing the base device to display the peripheral device at its location relative to the base device in the VR/AR environment. In some cases, the location data may be provided by one or more ultrasonic emitters. The peripheral device can be a controller device that tracks movement in 6 degrees of freedom (DOF).”)
classify the plurality of devices into a plurality of groups, based on the positions and the orientations of the plurality of devices within the real-world environment; (Connellan, [0007]; "plurality of sensors controlled by the one or more processors and disposed on the input device, each of the plurality of sensors configured to detect emissions received from a plurality of remote emitters (e.g., transducers), and each of the plurality of sensors associated by the one or more processors with a different particular location on the input device. In some implementations, the one or more processors can be configured to: determine a time-of-flight (TOF) of the detected emissions, determine a first estimate of a position and orientation of the input device based on the TOF of a subset of the detected emissions and the particular locations of each of the plurality of sensors on the input device that are detecting the detected emissions")
and send instructions to at least one of the devices in the given group to control the active sensors of the devices to operate by employing multiplexing. (Connellan, [0184]; "In some systems, ultrasonic transducers can be fired simultaneously or in a time-cascaded implementation, typically referred to as time multiplexing. In some embodiments, calculating a position and/or orientation of peripheral device (e.g., 6 DOF controller) may take into account a time of emission for each ultrasound wave when using a time multiplexed-based firing.")
Connellan does not expressly teach: wherein devices of a given group have active sensors that are determined to interfere with each other when their fields of view overlap, and selectively controlling the active sensors of the devices in the given group so that the active sensors with overlapping fields of view are controlled to operate by employing multiplexing, while active sensors of devices in other groups are controlled to operate without employing multiplexing when their fields of view do not overlap, wherein the classifying of the plurality of devices into the plurality of groups and the selectively controlling of the active sensors are performed dynamically based on the positions and orientations of the plurality of devices as they move within the real world environment.
However, Denenberg in the same field of endeavor teaches: wherein devices of a given group have active sensors that are determined to interfere with each other when their fields of view overlap (Denenberg, [0025], “crosstalk mitigation among sensors or cameras by computationally defining a noninterference scheme that respects the independent monitoring and operation of each workcell. The scheme may involve communication between adjacent cells to adjudicate non-interfering sensor operation or system-wide mapping of interference risks and mitigation thereof. Mitigation strategies can involve time-division and/or frequency-division multiplexing or other forms of frequency modification such as spread spectrum or chirping.”), and
selectively controlling the active sensors of the devices in the given group so that the active sensors with overlapping fields of view are controlled to operate by employing multiplexing (Denenberg [0118-0119], “The local masters of all the grouped workcells coordinate to mitigate interference among the zone groups and, therefore, at the zone level as well. In centralized implementations, instead of having controllers responsible for adjacent workspace areas cooperate with each other individually, a central supervisory control system 615 oversees multiple sets of zone-specific cameras or even the entire workspace and receives data from all cameras under its supervision. Similar to interference mitigation for a single zone, interference mitigation between zones can be achieved using time-division or frequency-division multiplexing (or a combination of both), where a control system assigns illumination wavelengths, modulation frequencies or time slices to zones and cameras during the startup or configuration phase. Frequency-division multiplexing can be enabled by changing either or both of the illumination wavelength or the modulation frequency. Other approaches involve static or dynamic interference maps that can be determined experimentally and used as inputs to the cameras to adjust for background interference”, [0121], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference. The number of independent timeslots needed in a given configuration will depend on camera geometry and locations. Graph coloring algorithms, for example, can be used to determine the minimum number of timeslots among all workcells in a facility”),
while active sensors of other groups are controlled to operate without employing multiplexing when their fields of view do not overlap and operate at full performance when not employing multiplexing (Denenberg, [0122], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference”; [0125], “For any given camera, it is only interfering cameras or camera combinations that must be mitigated; cameras whose illumination levels are non-interfering for other cameras need not be considered in a mitigation scheme” (examiner’s note: not considering them in the mitigation scheme means not multiplexing them with the other cameras))
wherein the classifying of the plurality of devices into the plurality of groups and the selectively controlling of the active sensors (Denenberg [0122], “noninterference scheme is generally implemented on a camera level rather than a workcell level, since only some of the cameras of a particular workcell are likely to interfere with those of a neighboring workcell. If some of the cameras in a first workcell are far enough (e.g., 2-4 meters) from the nearest cameras in a neighboring workcell, it is possible to assign the same timeslots to non-interfering cameras of both workcells, since their simultaneous operation will not cause interference. The number of independent timeslots needed in a given configuration will depend on camera geometry and locations. Graph coloring algorithms, for example, can be used to determine the minimum number of timeslots among all workcells in a facility”) are performed dynamically based on the positions and orientations of the plurality of devices as they move within the real world environment (Denenberg [0113], “all the cameras may be controlled by a zone-level control system, which triggers data capture sequentially so as to avoid interference or crosstalk among cameras. This can be achieved using, for example, time-division or frequency-division multiplexing (or both), where the zone-level control system assigns an illumination wavelength and/or modulation frequency and/or time slices to cameras during the startup or configuration phase or dynamically during operation so that each individual camera does not generate illumination that can be sensed by other cameras in the zone”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to include, wherein devices of a given group have active sensors that are likely to interfere with each other, and selectively controlling the active sensors of the devices in the given group so that the active sensors likely to interfere with each other are controlled to operate by employing multiplexing, while active sensors of other groups are controlled to operate without employing multiplexing, in the system of Connellan as taught by Denenberg, in order to better mitigate cross-talk between sensors based on other devices in their proximity. (see Denenberg [0024])
As per claim 12, Connellan in view of Denenberg teaches: The system of claim 11.
Connellan further teaches: wherein the at least one server is configured to: obtain a three-dimensional environment model of the real-world environment in which the plurality of devices are present; (Connellan, [0008]; “The AR/VR environment can be defined by a Cartesian coordinate system, polar coordinate system, or any suitable coordinate system for tracking in three-dimensional (3D) space. In some cases, the detected emissions can be propagated according to a TDM scheme wherein each consecutively detected emission corresponds to a different transducer of the plurality of emitters, and where each consecutively detected emission detected by one or more of the plurality of sensors is individually used to update the first estimate. In other embodiments, the detected emissions can be propagated according to a CDM scheme where the one or more processors are configured to: determine a first code in the detected emissions having a highest amplitude; process the first code to update the first estimate; subtract the highest amplitude code from the detected emissions; determine a code in the detected emissions having a second highest amplitude; and process the second code to update the first estimate.”)
determine, from the three-dimensional environment model, positions of optical barriers present in the real-world environment; (Connellan, [0207]; “In certain embodiments, a plurality of sensors (e.g., at least three) may be disposed on the peripheral device (e.g., ultrasound detectors 1232) and the location of each of the plurality of sensors with respect to the location on the peripheral device is known by the system (e.g., peripheral device and/or HMD). That is, the geometry of the peripheral device and the corresponding location of the sensors are mapped and correlated, and using the location of the transponders (e.g., ultrasound speakers on a remote lighthouse or HMD), the position and orientation of the peripheral device may be determined. Typically, tracking may be possible when the system can reliably identify the source of the transmissions (e.g., the ultrasound transponders) and the location of the sensor with respect to the peripheral device. Because each sensor is correlated with a particular location on the peripheral device, the different detected TOFs can inform the orientation of the peripheral device, as would be appreciated by one of ordinary skill in the art with the benefit of this disclosure. Such systems typically have synchronized timing of send and receive patterns (e.g., performed via speakers and microphones, respectively, as described above).”)
and identify a plurality of segments of the real-world environment that are optically separated from each other, based on the positions of the optical barriers, wherein devices in a given group are present in a corresponding segment of the real-world environment. (Connellan, [0036]; “As used herein, the term “real-world environment” or “real-world” may refer to the physical world. Hence, term “real-world arrangement” with respect to an object (e.g., a body part or user interface device) may refer to an arrangement of the object in the real-world and may be relative to a reference point. The term “arrangement” with respect to an object may refer to a position (location and orientation). Position can be defined in terms of a global or local coordinate system.”)
As per claim 13, Connellan in view of Denenberg teaches: The system of claim 12.
Connellan further teaches: wherein the at least one server is configured to: detect when a given device has moved from a first segment to a second segment; (Connellan, [0090]; “In such situations, tracking of the position and orientation of controller 508 relative to HMD 8 can switch to controller 508 and its sensors 514, 516. Control can switch back when the occlusion is removed. Alternately, in case of occlusions, the system does not switch from sensors located in the controllers to sensors located in the HMD. Instead, the system will switch from a set of sensors located in the controller (e.g., ultrasound) to another set of sensors (e.g., IMU) also located in the controller.”)
and re-classify the given device by shifting the given device from a first group to a second group, wherein the first group and the second group correspond to the first segment and the second segment, respectively. (Connellan, [0156]; “Accurately tracking a location and movement of a peripheral controller device (e.g., 6 DOF controller with one or more microphones) with respect to an HMD (e.g., with one or more ultrasonic transducers) can largely depend on an accuracy of a corresponding time-of-flight (TOF) measurement between the two entities. The TOF, as applied to certain embodiments, may correspond to the time it takes for an ultrasonic signal originating from a transmitter (e.g., transducer) from the HMD to reach a microphone on the peripheral device, which can correlate to a relative position. In some cases, undesirable effects such as interference, reflections, collisions, unfavorable environmental conditions, and the like, may negatively affect the accuracy of TOF measurements. Furthermore, some methods of TOF assessment may be better suited for periods of movement (e.g., envelope detection) versus periods of non-movement (e.g., correlation detection), as further described below.”)
As per claim 14, Connellan in view of Denenberg teaches: The system of claim 11.
Connellan further teaches: wherein the at least one server is configured to: detect, based on the positions and the orientations of the plurality of devices, when fields of view of active sensors of at least two devices in a given group do not overlap; (Connellan, [0174]; “In further embodiments, dynamically switching between transmitters (e.g., transceivers mounted on an HMD) and/or microphones (e.g., mounted on a controller device) can facilitate the avoidance of echo overlaps between received signals. For example, if there is significant echo overlap received on a controller device when the HMD/controller relationship is configured in a particular way, a different transducer positioned at a different location on the HMD may be used to change the distance or orientation between the source (e.g., adjacent transducer) and the destination (e.g., controller device) to help avoid the echo overlap condition. Echo overlap typically occurs when a microphone receives both a line-of-site signal and one or more echoes that are received at a time after the line-of-site signal less than the inverse of the bandwidth of the particular signal, which can significantly degrade the precision of a TOF measurement.”)
and send instructions to at least one of the at least two devices to control the active sensors of the at least two devices to operate without employing the multiplexing, when the fields of view of the active sensors of the at least two devices do not overlap. (Connellan, [0174]; “In further embodiments, dynamically switching between transmitters (e.g., transceivers mounted on an HMD) and/or microphones (e.g., mounted on a controller device) can facilitate the avoidance of echo overlap between received signals. For example, if there is significant echo overlap received on a controller device when the HMD/controller relationship is configured in a particular way, a different transducer positioned at a different location on the HMD may be used to change the distance or orientation between the source (e.g., adjacent transducer) and the destination (e.g., controller device) to help avoid the echo overlap condition. Echo overlap typically occurs when a microphone receives both a line-of-site signal and one or more echoes that are received at a time after the line-of-site signal less than the inverse of the bandwidth of the particular signal, which can significantly degrade the precision of a TOF measurement.”)
As per claim 15, Connellan in view of Denenberg teaches: The system of claim 14.
Connellan further teaches: wherein the at least one server is configured to: detect, based on the positions and the orientations of the plurality of devices, when the fields of view of the active sensors of the at least two devices in the given group do not overlap, but an angle between orientations of the active sensors of the at least two devices in the given group is smaller than a predefined threshold angle; (Connellan, [0184]; “In some systems, ultrasonic transducers can be fired simultaneously or in a time-cascaded implementation, typically referred to as time multiplexing. In some embodiments, calculating a position and/or orientation of peripheral device (e.g., 6 DOF controller) may take into account a time of emission for each ultrasound wave when using a time multiplexed-based firing.”)
and send instructions to at least one of the at least two devices to control the active sensors of the at least two devices to operate by employing the multiplexing, when the fields of view of the active sensors of the at least two devices in the given group do not overlap, but the angle between the orientations of the active sensors of the at least two devices is smaller than the predefined threshold angle. (Connellan, [0159] “In certain embodiments, both envelope detection and correlation detection may be employed simultaneously. In some cases, each method can be appropriately weighted such that envelope detection may receive more weighting (e.g., relative influence on the TOF calculation) during periods of detected movement greater than a threshold speed (e.g., >10 cm/s) and correlation detection may receive more weighting during detected periods of movement less than the threshold speed. Any suitable weighting algorithm may be employed, as would be understood by one of ordinary skill in the art with the benefit of this disclosure.”)
As per claim 17, Connellan in view of Denenberg teaches: The system of claim 11.
Connellan further teaches: wherein the multiplexing comprises at least one of: time-division multiplexing, wavelength-division multiplexing, space-division multiplexing. (Connellan, [0184]; “In some systems, ultrasonic transducers can be fired simultaneously or in a time-cascaded implementation, typically referred to as time multiplexing. In some embodiments, calculating a position and/or orientation of peripheral device (e.g., 6 DOF controller) may take into account a time of emission for each ultrasound wave when using a time multiplexed-based firing.”)
Claims 6, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Connellan (US 2019/0113966 A1) in view of Denenberg et al. (US 2023/0271322 A1) and further in view of Mihailescu (US 2013/0237811 A1).
Regarding claim 6, Connellan in view of Denenberg teaches: The computer-implemented method of claim 1.
The combination of Connellan and Denenberg does not expressly disclose: wherein in at least two devices in a given group, the at least one active sensor comprises a structured-light sensor, each of the at least two devices further comprising an active illuminator, the method further comprising: detecting, based on the positions and the orientations of the plurality of devices, when fields of view of structured-light sensors of the at least two devices overlap; determining one of the at least two devices whose structured-light sensor has a larger field of view than a field of view of a structured-light sensor of another of the at least two devices; and when the fields of view of the structured-light sensors of the at least two devices overlap, controlling an active illuminator of the determined one of the at least two devices to project structured light, whilst switching off an active illuminator of the another of the at least two devices and controlling the structured-light sensors of the at least two devices to operate without employing the multiplexing.
However, Mihailescu in the same field of endeavor discloses: a system wherein in at least two devices in a given group, the at least one active sensor comprises a structured-light sensor (see Fig. 11c), each of the at least two devices further comprising an active illuminator, the method further comprising: (see [0081]; “In the case of a Lidar scanner, the emitted signal is a pulsed laser beam, and the receiver is a light sensor able to measure time-of-flight information by direct energy detection or phase sensitive measurements. In the case of a 3D flash lidar, the emitted signal is a pulsed laser beam illuminating the whole field of view (FOV), and the receiver is a specialized light sensing array able to measure time-of-flight information. The computing unit 104 will analyze the range data to determine the relative translation and rotation of a coordinate system 113 associated with the probe 101 in respect to an arbitrary coordinate system 114 associated with the adjacent environment or investigated objects.”),
detecting, based on the positions and the orientations of the plurality of devices, when fields of view of structured-light sensors of the at least two devices overlap; (see [0180]; “FIG. 11C shows a front view of the whole assembly showcasing two light sensors behind windows 1112 and 1119. In a time of flight ranging camera implementation, one or more light sources 1109 can be combined with two time of flight light sensors behind the windows 1112 and 1119. In a structured light ranging camera implementation, a structured light source 1109 can be combined with two light sensors behind the windows 1112 and 1119 on either side of the structured light source to create a stereoscopic structured light camera. This arrangement will insure overlap in the field of view of the structured light source with the field of view of at least one light sensor”)
and determining one of the at least two devices whose structured-light sensor has a larger field of view than a field of view of a structured-light sensor of another of the at least two devices; (see [0102]; “Additionally, the light sensing system can comprise an assembly of two or more light sensing devices, such as a stereoscopic system made of at least two video cameras that have an overlapping field of view. One advantage of using an assembly of light sensing devices is an increased field of view.”)
and when the fields of view of the structured-light sensors of the at least two devices overlap, controlling an active illuminator of the determined one of the at least two devices to project structured light, whilst switching off an active illuminator of the another of the at least two devices and controlling the structured-light sensors of the at least two devices to operate without employing the multiplexing. (Mihailescu, [0081] “In the case of a structured light ranging camera, the emitted signal can be infrared (IR), visual or ultraviolet (UV) structured light or modulated light system, and the signal receiver is a IR, visual or UV light camera.” [0083] “Placing the source of the patterned light between two or more light cameras will ensure that the pattern projected by the source will be seen by at least one camera. Moreover, superior ranging precision can be obtained by using the stereoscopic-like information provided by any combination of multiple such cameras.” [0084] “For increased tracking performance, the ranging camera-based tracking system can be combined with other tracking systems, such as an inertial measurement unit (IMU), computer vision system, or ultrasound or electromagnetic ranging systems.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to include: wherein in at least two devices in a given group, the at least one active sensor comprises a structured-light sensor, each of the at least two devices further comprising an active illuminator, in the method of Connellan and Denenberg, as taught by Mihailescu. The motivation for doing so would be to provide an increased field of view by using an assembly of two or more light sensing devices, as taught by Mihailescu ([0102]).
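For clarity of the record, the control flow mapped above may be summarized as follows: overlap of the structured-light sensors' fields of view is detected from the reported positions and orientations, the device with the larger field of view is selected to project structured light, the other device's illuminator is switched off, and both sensors continue to operate without employing multiplexing. The sketch below is the examiner's own illustration under an assumed simplified 2-D geometry; the Device model, the angular overlap test, and all names are hypothetical and are not drawn from Connellan, Denenberg, or Mihailescu.

```python
import math
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical device model: 2-D pose plus structured-light sensor/illuminator state.
    position: tuple         # (x, y) position reported to the server
    orientation_deg: float  # boresight direction of the structured-light sensor, degrees
    fov_deg: float          # angular field of view of the structured-light sensor, degrees
    illuminator_on: bool = True
    multiplexing: bool = False

def fields_of_view_overlap(a: Device, b: Device) -> bool:
    # Coarse angular test (illustrative only): the two view cones are treated as
    # overlapping when each device lies inside the other's field of view, so the
    # projected structured-light patterns could interfere.
    bearing_ab = math.degrees(math.atan2(b.position[1] - a.position[1],
                                         b.position[0] - a.position[0]))
    def within(dev: Device, bearing: float) -> bool:
        diff = (bearing - dev.orientation_deg + 180.0) % 360.0 - 180.0
        return abs(diff) <= dev.fov_deg / 2.0
    return within(a, bearing_ab) and within(b, bearing_ab + 180.0)

def control_structured_light(a: Device, b: Device) -> None:
    # When the fields of view overlap, only the larger-FOV device projects
    # structured light; the other illuminator is switched off, and both
    # sensors keep operating without employing multiplexing.
    if fields_of_view_overlap(a, b):
        larger, other = (a, b) if a.fov_deg >= b.fov_deg else (b, a)
        larger.illuminator_on = True
        other.illuminator_on = False
        a.multiplexing = False
        b.multiplexing = False
```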
Regarding claim 16, Connellan in view of Denenberg teaches: the computer-implemented method of claim 11.
The combination of Connellan and Denenberg does not expressly disclose: wherein in at least two devices in a given group, the at least one active sensor comprises a structured-light sensor, each of the at least two devices further comprising an active illuminator, wherein the at least one server is configured to: detect, based on the positions and the orientations of the plurality of devices, when fields of view of structured-light sensors of the at least two devices overlap; determine one of the at least two devices whose structured-light sensor has a larger field of view than a field of view of a structured-light sensor of another of the at least two devices; and when the fields of view of the structured-light sensors of the at least two devices overlap, send instructions to at least one of the at least two devices to control an active illuminator of the determined one of the at least two devices to project structured light, whilst switching off an active illuminator of the another of the at least two devices and controlling the structured-light sensors of the at least two devices to operate without employing the multiplexing.
However, Mihailescu, in the same field of endeavor, discloses: a system wherein in at least two devices in a given group (see Fig. 11C), the at least one active sensor comprises a structured-light sensor, each of the at least two devices further comprising an active illuminator, wherein the at least one server is configured to: (see [0081]; “the emitted signal is a pulsed laser beam illuminating the whole field of view (FOV), and the receiver is a specialized light sensing array able to measure time-of-flight information.”),
detect, based on the positions and the orientations of the plurality of devices, when fields of view of structured-light sensors of the at least two devices overlap; (see [0180]; “FIG. 11C shows a front view of the whole assembly showcasing two light sensors behind windows 1112 and 1119. In a time of flight ranging camera implementation, one or more light sources 1109 can be combined with two time of flight light sensors behind the windows 1112 and 1119. In a structured light ranging camera implementation, a structured light source 1109 can be combined with two light sensors behind the windows 1112 and 1119 on either side of the structured light source to create a stereoscopic structured light camera. This arrangement will insure overlap in the field of view of the structured light source with the field of view of at least one light sensor.”)
determine one of the at least two devices whose structured-light sensor has a larger field of view than a field of view of a structured-light sensor of another of the at least two devices; (see [0102]; “Additionally, the light sensing system can comprise an assembly of two or more light sensing devices, such as a stereoscopic system made of at least two video cameras that have an overlapping field of view. One advantage of using an assembly of light sensing devices is an increased field of view.”)
and when the fields of view of the structured-light sensors of the at least two devices overlap, send instructions to at least one of the at least two devices to control an active illuminator of the determined one of the at least two devices to project structured light, whilst switching off an active illuminator of the another of the at least two devices and controlling the structured-light sensors of the at least two devices to operate without employing the multiplexing. (Mihailescu, [0081] “In the case of a structured light ranging camera, the emitted signal can be infrared (IR), visual or ultraviolet (UV) structured light or modulated light system, and the signal receiver is a IR, visual or UV light camera.” [0083] “Placing the source of the patterned light between two or more light cameras will ensure that the pattern projected by the source will be seen by at least one camera. Moreover, superior ranging precision can be obtained by using the stereoscopic-like information provided by any combination of multiple such cameras.” [0084] “For increased tracking performance, the ranging camera-based tracking system can be combined with other tracking systems, such as an inertial measurement unit (IMU), computer vision system, or ultrasound or electromagnetic ranging systems.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to include: wherein in at least two devices in a given group, the at least one active sensor comprises a structured-light sensor, each of the at least two devices further comprising an active illuminator, in the method of Connellan and Denenberg, as taught by Mihailescu. The motivation for doing so would be to provide an increased field of view by using an assembly of two or more light sensing devices, as taught by Mihailescu ([0102]).
Regarding claim 18, Connellan in view of Denenberg teaches: the computer-implemented method of claim 11.
Connellan in view of Denenberg further discloses: the use of a light sensor system that includes photodetectors and electronics for detecting IR light ([0119]).
The combination of Connellan and Denenberg does not expressly disclose: wherein the instructions comprise at least one of: time slots in which an active illuminator of a given device is to project light, time slots in which an active sensor of the given device is to sense reflections of the light, a framerate at which the active illuminator of the given device is to project the light, a framerate at which the active sensor of the given device is to sense the reflections of the light, a wavelength of the light to be projected by the active illuminator of the given device, a wavelength of the light to be sensed by the active sensor of the given device, a pattern of the light to be projected by the active illuminator of the given device, a pattern of the light to be sensed by the active sensor of the given device, an area of a field of view on which the active illuminator of the given device is to project the light, an area of a field of view from where the active sensor is to sense the reflections of the light.
However, Mihailescu, in the same field of endeavor, discloses: a system for tracking and guiding sensors, wherein the instructions comprise an area of a field of view on which the active illuminator of the given device is to project the light (see [0081]; “the emitted signal is a pulsed laser beam illuminating the whole field of view (FOV), and the receiver is a specialized light sensing array able to measure time-of-flight information.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine Mihailescu's teaching of tracking and guiding sensors, including a device able to project light over a defined field of view, with the system of Connellan and Denenberg. The motivation for doing so would be to define the area of the field of view on which the active illuminator of the given device is to project the light, enabling appropriate tracking of objects at a specific location.
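For clarity of the record, the alternative instruction fields recited in claim 18 may be visualized as a simple parameter structure sent from the server to a given device. The following sketch is the examiner's own illustration; every field name, type, and unit is hypothetical and not drawn from Connellan, Denenberg, or Mihailescu, and the claim's “at least one of” language is reflected by making every field optional.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SensorInstructions:
    # Hypothetical encoding of the instruction fields recited in claim 18;
    # any subset ("at least one of") may be populated for a given device.
    illuminator_time_slots: Optional[List[Tuple[float, float]]] = None  # (start, end), seconds
    sensor_time_slots: Optional[List[Tuple[float, float]]] = None       # (start, end), seconds
    illuminator_framerate_hz: Optional[float] = None
    sensor_framerate_hz: Optional[float] = None
    illuminator_wavelength_nm: Optional[float] = None
    sensor_wavelength_nm: Optional[float] = None
    illuminator_pattern: Optional[str] = None  # e.g., a structured-light pattern identifier
    sensor_pattern: Optional[str] = None
    illuminator_fov_area: Optional[Tuple[float, float, float, float]] = None  # region of FOV to illuminate
    sensor_fov_area: Optional[Tuple[float, float, float, float]] = None       # region of FOV to sense
```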
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Rune (US 2021/0314910) discloses that when some POs overlap (e.g., coincide) with SS Burst Sets, configuration of the paging transmission pattern (i.e., the PDCCH monitoring pattern) becomes a problem, and that for a PO overlapping with an SS Burst Set, the paging transmissions (PDCCH and/or PDSCH) should be frequency-multiplexed with the SSB transmissions to realize the above-described benefits associated with POs overlapping/coinciding with SS Burst Sets ([0074]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARGARET G WEBB whose telephone number is (571)270-7803. The examiner can normally be reached M-F 9:00-6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Appiah can be reached at (571) 272-7904. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARGARET G WEBB/ Primary Examiner, Art Unit 2641