DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Claim Status
Claims 1, 6, and 7 are amended.
Claim 2 is canceled.
No claims are newly added.
Claims 1 and 3-9 are presented for examination.
Response to Arguments
Applicant’s arguments (see Remarks filed 12/23/2025) have been fully considered but are moot in view of the new grounds of rejection.
Regarding applicant’s argument (Remarks, pages 6-7), “there is no teaching, suggestion, or motivation to combine the references in the manner suggested by the Examiner, and further, the references would change the principal operation of the references and render the prior art unsatisfactory for its intended purpose. This is due to, at the time of the invention in ca. 2007-08, the industry recognized that transmitting a VGA signal and displaying the same over Bluetooth 2.0, as claimed by the present application, was not possible. The cited art of record fails to address this and other issues and therefore fails to render the present application, as claimed, as obvious. In attempting to solve this problem, it was initially explored to use Wi-Fi or IEEE 802.11 chipsets, however, a major battery pack would be required to power the same, which rendered the wearability aspect of the device impossible and further would not communicate with phones in 2007 since phones did not integrate Wi-Fi. Bluetooth had substantially different characteristics, reliability, and power levels as compared to Wi-Fi or IEEE 802.11. Therefore, in order to transmit any information wirelessly to a micro-display and create a viable product, Bluetooth was the best alternative to use at that time, assuming it could be made to work. Factors that were not contemplated or considered by the cited art of record based on the disclosures therein.”
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
As noted by the Board in its Final Written Decision in IPR2021-00417 (PTAB, July 12, 2022):
“The correct legal framework for obviousness does not involve divining the intention of an inventor of a prior art reference but, rather, involves determining ‘what the combined teaching would have suggested to those of ordinary skill in the art.’”
As to applicant’s arguments that Bluetooth 2.0 was regarded as unsuitable for such a use case because Bluetooth was being marketed for cell phone voice/audio applications and lacked a standard video profile or appropriate video encoding, these arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant further argues (Remarks, pages 7-8) that, “At the time of the present invention, however, Bluetooth 2.0 was regarded as unsuitable for such a use case. Bluetooth was being marketed for audio cell phone voice applications, it lacked a standard video profile or appropriate video encoding, there were synchronization issues between master & slave devices in Bluetooth, there was an absence of tools like FFMPEG that could work with embedded devices, Wi-Fi was a technology more designed for multimedia, IP (Internet Protocol) over Bluetooth was done over RFCOMM except for PAN (Personal Area Networks) which was accomplished over BNEP (Bluetooth Network Ethernet Profile), and there were synchronization issues between video and audio.
However, the Applicant solved all these problems with a combination of hardware and software which, at the time of the invention, was not shown by the cited art of record to be possible. For example, the Applicant was able to utilize a counterintuitive design, in the master- slave relationship, by using the phone as the master and the controller as the slave. Such a relationship is not appreciated, disclosed, or otherwise made obvious by the cited art of record. It is also important to remember that at the time of the invention, Bluetooth 2.0 EDR required extensive modifications to accommodate video streaming as that particular technological sector had not evolved into what it is today.”
In response, the examiner respectfully points out that the rejection is based on the combination of Zavracky, Kang, Bengtsson, and Ishizuka in view of iXBT, while applicant argues against the references individually. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
The examiner further respectfully points out that the base reference, Zavracky, provides the master-slave relationship: as disclosed in par. 0167 and 0173, the head-mounted device 1511 receives wireless signals from a host device. Zavracky was modified with the teachings of using Bluetooth technology for transmission of visual data, taught by Kang and Bengtsson, and was further modified with the teachings of Ishizuka and iXBT to teach two data channels delivering audio, video, and other information, and an ARM DSP processor for generating portable video. The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
Applicant further argues (Remarks page 8) that, “the Applicant was successfully able to use Bluetooth for video streaming at different display resolutions, frame rates, and wireless transports, and such was not shown by the cited art of record, taken singularly or in combination with one another. In order to solve this problem, the Applicant created a file to maintain a ratio of the multiplexed video and audio frames to be properly synchronized. The separate audio and video files would then be played locally and as the streams were decoded into individual frames, the frames would be stored in temporary audio and video frame files. Then the ratio of audio frames to video frames would be calculated (AV Ratio) and the files would be muxed together into one file, mixing the audio and video frames while maintaining the AV Ratio to make sure that as the file was streamed, the video and audio buffers would stay synced in time.”
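For illustration only, the interleaving scheme the applicant describes above (computing an audio-to-video frame ratio and muxing frames while maintaining that ratio so playback buffers stay synchronized) can be sketched as follows. All names are hypothetical; this sketch is not asserted to represent applicant's actual implementation.

```python
from fractions import Fraction

def mux_av(audio_frames, video_frames):
    """Interleave audio and video frames into one stream while
    preserving the audio-to-video frame ratio (the "AV Ratio"),
    so the audio and video buffers stay synchronized in time."""
    ratio = Fraction(len(audio_frames), len(video_frames))  # AV Ratio
    muxed, a_emitted = [], 0
    for v_emitted, v in enumerate(video_frames, start=1):
        muxed.append(("V", v))
        # Emit enough audio frames so that a_emitted / v_emitted tracks the ratio.
        target = int(ratio * v_emitted)
        while a_emitted < target:
            muxed.append(("A", audio_frames[a_emitted]))
            a_emitted += 1
    # Flush any remaining audio frames at the end of the stream.
    for a in audio_frames[a_emitted:]:
        muxed.append(("A", a))
    return muxed
```

With six audio frames and three video frames (an AV Ratio of 2), the sketch yields the pattern V A A V A A V A A.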
In response, the examiner respectfully points out that the features argued above, though asserted to be essential to applicant’s invention, are not recited in the claims themselves. Applicant is therefore arguing unclaimed features. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1 and 3-9 are rejected on the ground of nonstatutory obviousness-type double patenting over claims 1-3 of U.S. Patent No. 10,474,418 in view of Zavracky, Duda, Agrawal, and Bengtsson. Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter.
The following subject matter claimed in the instant application is fully disclosed in U.S. Patent No. 10,474,418 and is covered by that patent, since the patent and the application claim common subject matter, as follows:
Instant Application 15/802,908: claims 1 and 3-9 map to claims 1-3 of U.S. Patent No. 10,474,418.
Claim 1 of U.S. Patent No. 10,474,418 recites substantially the same inventive steps as claim 1 of the instant application. However, claim 1 of U.S. Patent No. 10,474,418 does not claim the feature of, “wherein the near-eye wearable wireless transfer device has a display coupled to an arm, a processor, a memory, a wireless transceiver, an audio input, an audio output, and a speech recognition engine,
wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway,
wherein the signal features auto sequencing and guaranteed packet delivery.”
Zavracky discloses, wherein the near-eye wearable wireless transfer device has a display coupled to an arm (Par. 0160, At least one display pod 1100 is mounted to the horizontal support 1630 (i.e. arm as shown in fig. 29A, 28)), a processor (Par. 1064, fig. 20, CPU 1712), a memory (Par. 0156, fig. 28, Personal Computer Memory Card International Association (PCMCIA) interface module 1554 coupled to the head band 1512. A PCMIA card 1558 is inserted into the PCMCIA interface module 1554), a wireless transceiver (Par. 0157, the communication module 1720 includes a wireless transducer for transmitting and receiving digital audio, video and data signals), an audio input (Attached to one of the earphones is a microphone arm 1690 having a microphone 1559), an audio output (Par. 0158, The earphones 1603a, 1603b).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,474,418 by adding the feature wherein the near-eye wearable wireless transfer device has a display coupled to an arm, a processor, a memory, a wireless transceiver, an audio input, and an audio output, as taught by Zavracky, the rationale being to provide drive circuits of panel displays having the desired speed, resolution and size, and to provide for ease and reduced cost of fabrication, as disclosed in Zavracky, par. 0008.
Claim 1 of U.S. Patent, US 10,474,418 in view of Zavracky does not disclose, “wherein the near-eye wearable device has a speech recognition engine,
wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway,
wherein the signal features auto sequencing and guaranteed packet delivery.”
Duda discloses, wherein the near-eye wearable device has a speech recognition engine (Par. 0003-0004, wearable hands free solar powered cap/visor integrated communications and entertainment devices and more particularly to an apparatus that is practically invisible, applies voice recognition and heads up display technology, a head wearable cap or visor resulting in a compact, lightweight, integrated, hands free, manual or voice activated, heads-up (digital) displayed).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,474,418 in view of Zavracky with the teachings of Duda. Clear motivation exists to make this modification, which would enable the HMD of Zavracky to benefit from voice-activated control of the operation of displayed content, such as scrolling/selecting functions, playing media, setting volume, and the like (see Duda par. 0050).
Claim 1 of U.S. Patent, US 10,474,418 in view of Zavracky in further view of Duda does not disclose, “wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway,
wherein the signal features auto sequencing and guaranteed packet delivery.”
Agrawal discloses, wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway (Par. 0018, fig. 3, Bluetooth protocol stack. Par. 0027, fig. 3, a top radio access control protocol layer (RAC), which acts as a bridge between the two protocols, as well as a packet router. This layer receives and forwards transmission packets to and from the Bluetooth and IEEE 802.11 networks utilizing a single antenna. Fig. 3, par. 0018-0019 disclose that the baseband layer (i.e., the baseband layer in the Bluetooth protocol stack is part of the physical layer of the OSI model) defines key procedures that enable devices to communicate with each other using Bluetooth technology. Par. 0029, devices which possess the one-chip, dual-mode IEEE 802.11 and Bluetooth wireless radio interface chip are referred to as Bluetooth Wireless Gateways (BWGs), i.e., wireless communication is established using the Bluetooth physical (i.e., baseband) layer with a Bluetooth bridge (i.e., proxy) to implement packet routing, or a packet switching gateway).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,474,418 in view of Zavracky, in further view of Duda, with the teachings of a wireless transceiver employing a Bluetooth physical layer with a Bluetooth proxy to implement a packet switching gateway, as taught by Agrawal, the rationale being to provide the Bluetooth device with access to the outside world, such as the Internet, by combining Bluetooth with WLAN and packet switching between the Bluetooth and WLAN networks, as disclosed in Agrawal, par. 0004.
Claim 1 of U.S. Patent No. 10,474,418 in view of Zavracky, in further view of Duda, in further view of Agrawal does not disclose, “wherein the signal features auto sequencing and guaranteed packet delivery.”
Bengtsson discloses, Bluetooth connection wherein the signal features auto sequencing and guaranteed packet delivery (col. 13 lines 3-22 – RFCOMM features guaranteed QoS, a form of guaranteeing delivery of at least some of the packets. The L2CAP protocol, upon which RFCOMM is built, features auto sequencing as is known in the art, and as described in the Bluetooth Core Specification v1.0B).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Claim 1 of U.S. Patent, US 10,474,418 in view of Zavracky in further view of Duda in further view of Agrawal with the teachings of Bengtsson, the rationale being to provide improved quality of delivery of the content.
Claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting over claim 1 of U.S. Patent No. 10,579,324 in view of Zavracky, Duda, and Agrawal. Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter.
The following subject matter claimed in the instant application is fully disclosed in U.S. Patent No. 10,579,324 and is covered by that patent, since the patent and the application claim common subject matter, as follows:
Instant Application 15/802,908: claim 1 maps to claim 1 of U.S. Patent No. 10,579,324.
Claim 1 of U.S. Patent No. 10,579,324 recites substantially the same inventive steps as claim 1 of the instant application. However, claim 1 of U.S. Patent No. 10,579,324 does not claim the feature of, “wherein the near-eye wearable wireless transfer device has a display coupled to an arm, a processor, a memory, a wireless transceiver, an audio input, an audio output, and a speech recognition engine,
wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway.”
Zavracky discloses, wherein the near-eye wearable wireless transfer device has a display coupled to an arm (Par. 0160, At least one display pod 1100 is mounted to the horizontal support 1630 (i.e. arm as shown in fig. 29A, 28)), a processor (Par. 1064, fig. 20, CPU 1712), a memory (Par. 0156, fig. 28, Personal Computer Memory Card International Association (PCMCIA) interface module 1554 coupled to the head band 1512. A PCMIA card 1558 is inserted into the PCMCIA interface module 1554), a wireless transceiver (Par. 0157, the communication module 1720 includes a wireless transducer for transmitting and receiving digital audio, video and data signals), an audio input (Attached to one of the earphones is a microphone arm 1690 having a microphone 1559), an audio output (Par. 0158, The earphones 1603a, 1603b).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,579,324 by adding the feature wherein the near-eye wearable wireless transfer device has a display coupled to an arm, a processor, a memory, a wireless transceiver, an audio input, and an audio output, as taught by Zavracky, the rationale being to provide drive circuits of panel displays having the desired speed, resolution and size, and to provide for ease and reduced cost of fabrication, as disclosed in Zavracky, par. 0008.
Claim 1 of U.S. Patent, US 10579324 in view of Zavracky does not disclose, “wherein the near-eye wearable device has a speech recognition engine,
wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway.”
Duda discloses, wherein the near-eye wearable device has a speech recognition engine (Par. 0003-0004, wearable hands free solar powered cap/visor integrated communications and entertainment devices and more particularly to an apparatus that is practically invisible, applies voice recognition and heads up display technology, a head wearable cap or visor resulting in a compact, lightweight, integrated, hands free, manual or voice activated, heads-up (digital) displayed).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,579,324 in view of Zavracky with the teachings of Duda. Clear motivation exists to make this modification, which would enable the HMD of Zavracky to benefit from voice-activated control of the operation of displayed content, such as scrolling/selecting functions, playing media, setting volume, and the like (see Duda par. 0050).
Claim 1 of U.S. Patent, US 10579324 in view of Zavracky in further view of Duda does not disclose, “wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway.”
Agrawal discloses, wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway (Par. 0018, fig. 3, Bluetooth protocol stack. Par. 0027, fig. 3, a top radio access control protocol layer (RAC), which acts as a bridge between the two protocols, as well as a packet router. This layer receives and forwards transmission packets to and from the Bluetooth and IEEE 802.11 networks utilizing a single antenna. Fig. 3, par. 0018-0019 disclose that the baseband layer (i.e., the baseband layer in the Bluetooth protocol stack is part of the physical layer of the OSI model) defines key procedures that enable devices to communicate with each other using Bluetooth technology. Par. 0029, devices which possess the one-chip, dual-mode IEEE 802.11 and Bluetooth wireless radio interface chip are referred to as Bluetooth Wireless Gateways (BWGs), i.e., wireless communication is established using the Bluetooth physical (i.e., baseband) layer with a Bluetooth bridge (i.e., proxy) to implement packet routing, or a packet switching gateway).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of U.S. Patent No. 10,579,324 in view of Zavracky, in further view of Duda, with the teachings of a wireless transceiver employing a Bluetooth physical layer with a Bluetooth proxy to implement a packet switching gateway, as taught by Agrawal, the rationale being to provide the Bluetooth device with access to the outside world, such as the Internet, by combining Bluetooth with WLAN and packet switching between the Bluetooth and WLAN networks, as disclosed in Agrawal, par. 0004.
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claims 1 and 3-9 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Zavracky, US 20020030649, in view of Kang, US 20080032627, in further view of Bengtsson et al., WIPO Pub. No. WO 02/15075, in further view of Ishizuka, JP 2002112093 (reference is made to the English translation), in further view of iXBT Labs, “TI's OMAP Processor…”, published 05/12/2006, in further view of Tsai (US 20080218581), in further view of Duda (US 20020186180), in further view of Agrawal et al. (US 20080117850, supported by provisional application 60855288).
As to claim 1 Zavracky discloses a method comprising:
transmitting, via a wireless transceiver of a host device, a signal to a near eye, wearable wireless transfer device (¶¶167 and 173 – the head-mounted device 1511 receives wireless signals from a host device);
wherein the near-eye wearable wireless transfer device has a display coupled to an arm (Par. 0160, At least one display pod 1100 is mounted to the horizontal support 1630 (i.e. arm as shown in fig. 29A, 28)), a processor (Par. 1064, fig. 20, CPU 1712), a memory (Par. 0156, fig. 28, Personal Computer Memory Card International Association (PCMCIA) interface module 1554 coupled to the head band 1512. A PCMIA card 1558 is inserted into the PCMCIA interface module 1554), a wireless transceiver (Par. 0157, the communication module 1720 includes a wireless transducer for transmitting and receiving digital audio, video and data signals), an audio input (Attached to one of the earphones is a microphone arm 1690 having a microphone 1559), an audio output (Par. 0158, The earphones 1603a, 1603b),
receiving, via a wireless transceiver of the near eye, wearable wireless transfer device, the signal from the host device (¶167), the signal being of at least Video Graphic Array (VGA) quality (¶¶53-55) and being received over a wireless connection (¶167),
wherein the signal comprises a first channel consisting of audio/video data and a second channel consisting of non-audio/video data (¶¶167-168 – the signal includes audio/visual information and data signals, thus includes two (logical) channels of information); and
generating a video signal, from the signal, that is suitable for handling by a display driver in the near eye, wearable wireless transfer device (¶164 – display driver 1716 generates a video signal for display on panel 1700),
wherein the near eye, wearable wireless transfer device has at least one display configured to be a near eye, wearable display (Fig. 29A-29D), and
wherein an output of the at least one display, generated from the video signal (¶¶53-54, ¶62, and ¶112), is of at least Video Graphic Array (VGA) quality (¶¶53-55) and is received over the wireless connection (¶167).
Zavracky fails to disclose the signal received over a Serial Port Profile (SPP) Bluetooth wireless connection; wherein the signal features auto sequencing and guaranteed packet delivery.
However, in an analogous art, Kang discloses, the signal received over a Serial Port Profile (SPP) Bluetooth wireless connection (Par. 0021, first Bluetooth module 2 and the second Bluetooth module 4 establish a wireless communication link for transferring real-time image data 9 between the host 1 and image module 3 using SPP. The SPP sets up a virtual serial port relying on RFCOMM (Radio Frequency Communication, a serial cable emulation protocol) to replace the original physical connection).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky with the teachings of Kang, the rationale being to allow real-time image data to be transmitted without a physical cable connection. Bluetooth SPP (Serial Port Profile) defines how to set up virtual serial ports on two devices and connect them over Bluetooth.
The system of Zavracky and Kang teaches that SPP sets up a virtual serial port relying on RFCOMM (Radio Frequency Communication, a serial cable emulation protocol) to replace the original physical connection. However, it does not teach an RFCOMM Bluetooth connection wherein the signal features auto sequencing and guaranteed packet delivery.
Bengtsson discloses, Bluetooth connection wherein the signal features auto sequencing and guaranteed packet delivery (col. 13 lines 3-22 – RFCOMM features guaranteed QoS, a form of guaranteeing delivery of at least some of the packets. The L2CAP protocol, upon which RFCOMM is built, features auto sequencing as is known in the art, and as described in the Bluetooth Core Specification v1.0B).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky and Kang with the teachings of Bengtsson, the rationale being to provide improved quality of delivery of the content.
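As a generic illustration of the two properties for which Bengtsson is cited, a stop-and-wait scheme in which every packet is automatically tagged with a sequence number and retransmitted until acknowledged exhibits both "auto sequencing" and "guaranteed packet delivery." This is a simplified sketch with hypothetical names, not a reproduction of the actual L2CAP/RFCOMM mechanisms.

```python
def send_reliable(payloads, unreliable_send, recv_ack, max_retries=5):
    """Tag each payload with an automatically incremented sequence
    number (auto sequencing) and retransmit it until the receiver
    acknowledges that number (guaranteed packet delivery)."""
    for seq, payload in enumerate(payloads):
        packet = (seq, payload)
        for _ in range(max_retries):
            unreliable_send(packet)   # the channel may silently drop the packet
            if recv_ack() == seq:     # receiver echoes the sequence number back
                break
        else:
            raise ConnectionError(f"packet {seq} never acknowledged")
```

Even when the underlying channel drops packets, each sequence number is retried until the acknowledgment arrives, so the receiver sees the full, ordered stream.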
The system of Zavracky, Kang and Bengtsson fails to disclose that the signal consists of audio and visual information received over a first radio frequency channel and non-audio and non-visual information received over a second radio frequency channel.
Ishizuka discloses transmitting a wireless Bluetooth signal to a video reception device (Fig. 1: 27), wherein the signal consists of visual information received over a first radio frequency channel and non-audio and non-visual information received over a second radio frequency channel (¶¶42-44 – video data (see also ¶40, which describes that recording the video includes capturing audio, which would suggest to the skilled artisan that audio and video are delivered to the output device) and control information (non-A/V information) are sent via separate Bluetooth transmission channels. A POSA would understand this as a disclosure that the two types of information are delivered using first and second RF channels).
It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Zavracky, Kang and Bengtsson with the teachings of Ishizuka, the rationale being to ensure proper reception of the video data by dedicating a Bluetooth channel to its delivery. As described above, Ishizuka suggests, but does not explicitly disclose, that audio is sent along with the video to the receiving device. However, because the Zavracky device already receives audio for output along with video (see Zavracky ¶167), the combination of these references would suggest sending A/V information via Ishizuka’s first Bluetooth channel and control information via a second channel.
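The two-logical-channel arrangement discussed above (a first channel carrying audio/video information and a second carrying non-A/V control information) can be pictured with a minimal demultiplexer sketch. The (channel_id, payload) packet format and the channel identifiers are purely hypothetical.

```python
AV_CHANNEL, CONTROL_CHANNEL = 1, 2  # hypothetical channel identifiers

def demux(packets):
    """Route each incoming packet to the audio/video stream or the
    control stream according to its logical channel identifier."""
    av_stream, control_stream = [], []
    for channel_id, payload in packets:
        if channel_id == AV_CHANNEL:
            av_stream.append(payload)
        else:
            control_stream.append(payload)
    return av_stream, control_stream
```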
The system of Zavracky, Kang and Bengtsson and Ishizuka fails to disclose that the generating is performed by an ARM DSP processor of the near eye, wearable wireless transceiver device.
However, in an analogous art, iXBT discloses a portable media device that generates videos by an ARM DSP processor of the near eye, wearable wireless transceiver device (pg. 1-2: the OMAP 2420 is an ARM DSP processor for generating video in a portable media device).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky, Kang and Bengtsson and Ishizuka with the teachings of iXBT. Clear motivation exists to make this modification, which would enable the HMD of Zavracky to benefit from improved functionality offered by the OMAP 2420. Such motivation includes improved speed and video quality, “enhanced power and performance management through TI's SmartReflex solutions that incorporate a range of intelligent and adaptive hardware and software technologies that dynamically control voltage, frequency and power based on device activity, modes of operation and process and temperature variation.” (see iXBT pg.1-2). For these reasons, the skilled artisan would conclude that modifying the HMD of Zavracky to utilize the OMAP processor described in the iXBT article would result in an improved product.
The system of Zavracky, Kang, Bengtsson, Ishizuka, and iXBT fails to disclose that the ARM DSP processor is configured to open a communication port to the host system to receive the signal, wherein the ARM DSP processor is configured to modify the signal into an audio stream and a video stream.
However, in an analogous art, Tsai discloses that the ARM DSP processor is configured to open a communication port to the host system to receive the signal, wherein the ARM DSP processor is configured to modify the signal into an audio stream and a video stream (Par. 0029, par. 0039, figs. 4 and 7: the audio/video integrated unit 504 is electronically connected to the first memory unit 500 and the second memory 502 for separating at least one audio signal and at least one video signal of a plurality of voice packets via the communication software; figs. 4 and 7 show receiving an audio/video communication packet; the audio/video integrated unit 504 includes an audio/video processor 5042 that drives the audio/video encoder/decoder to separate the audio packet and video packet to be output via the audio output and video display unit. Par. 0032: the audio/video processor 5042 is a microprocessor designed by ARM Corporation. Par. 0039: the network audio/video communication device receives at least one audio/video communication packet via the Internet (S200); i.e., as shown in the network audio/video communication device diagram of fig. 4, the audio/video processor (i.e., ARM processor) is connected to the network interface unit 506 to receive audio/video and to connect to a transmitter device or host system (see par. 0041). The limitation “modify the signal into an audio stream and a video stream” has been interpreted as stripping the signal into respective audio and video streams, as disclosed in applicant’s specification, par. 0031).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky, Kang, Bengtsson, Ishizuka, and iXBT with the teachings of Tsai. Clear motivation exists to make this modification, which would enable the HMD of Zavracky to benefit from the improved functionality of Tsai’s ARM processor, which receives and decodes the audio/video signal into separate video and audio outputs for the user. Such motivation includes support for MPEG-4, H.264, or higher specifications (see Tsai par. 0030). For these reasons, the skilled artisan would conclude that modifying the HMD of Zavracky to utilize the ARM processor described in Tsai to separate the audio and video signals for output by the video display and audio output would result in an improved product.
The system of Zavracky, Kang, Bengtsson, Ishizuka, iXBT, and Tsai fails to disclose wherein the near-eye wearable device has a speech recognition engine.
However, in an analogous art, Duda discloses wherein the near-eye wearable device has a speech recognition engine (Par. 0003-0004: a wearable, hands-free, solar-powered cap/visor with integrated communications and entertainment devices, and more particularly an apparatus that is practically invisible and applies voice recognition and heads-up display technology to a head-wearable cap or visor, resulting in a compact, lightweight, integrated, hands-free, manual or voice activated, heads-up (digital) display).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky, Kang, Bengtsson, Ishizuka, iXBT, and Tsai with the teachings of Duda. Clear motivation exists to make this modification, which would enable the HMD of Zavracky to benefit from voice-activated control of the operation of displayed content, such as scrolling/selecting functions, playing media, setting volume, and the like (see Duda par. 0050). For these reasons, the skilled artisan would conclude that modifying the HMD of Zavracky to utilize the speech recognition feature described in Duda to control the operation of the HMD unit using voice commands, thereby enabling hands-free operation, would result in an improved product.
The system of Zavracky, Kang, Bengtsson, Ishizuka, iXBT, Tsai, and Duda fails to disclose wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway.
However, in an analogous art, Agrawal discloses wherein the wireless transceiver employs a Bluetooth® physical layer with a Bluetooth® proxy to implement a packet switching gateway (Par. 0018, fig. 3: the Bluetooth protocol stack. Par. 0027, fig. 3: a top radio access control protocol layer (RAC), which acts as a bridge between the two protocols, as well as a packet router; this layer receives and forwards transmission packets to and from the Bluetooth and IEEE 802.11 networks utilizing a single antenna. Fig. 3 and par. 0018-0019 disclose the baseband layer (the baseband layer in the Bluetooth protocol stack is part of the physical layer of the OSI model), which defines key procedures that enable devices to communicate with each other using Bluetooth technology. Par. 0029: devices that possess the one-chip, dual-mode IEEE 802.11 and Bluetooth wireless radio interface chip are referred to as Bluetooth Wireless Gateways (BWGs); i.e., wireless communication is established using the Bluetooth physical (i.e., baseband) layer with a Bluetooth bridge (i.e., proxy) to implement packet routing, or a packet switching gateway).
It would have been obvious to a skilled artisan at the time of the invention to modify the system of Zavracky, Kang, Bengtsson, Ishizuka, iXBT, Tsai, and Duda with the teaching of a wireless transceiver employing a Bluetooth physical layer with a Bluetooth proxy to implement a packet switching gateway, as taught by Agrawal, to provide Bluetooth devices access to the outside world, such as the Internet, by combining Bluetooth with WLAN through packet switching between the Bluetooth and WLAN networks, as disclosed in Agrawal, par. 0004.
As to claim 3, Zavracky discloses sending the signal and other data over a high-speed connection (¶167). Zavracky fails to explicitly disclose that the signal and other data are multiplexed. Official notice is taken that multiplexing was widely practiced in the art at the time of the invention, and including this functionality in the system of Zavracky would have been obvious to the skilled artisan in order to send audio and video contemporaneously in a manner that synchronizes their reception.
As to claim 4, Bengtsson discloses reading an application program from a memory (Fig. 3 and its description).
As to claim 5, Zavracky discloses generating a monaural or stereo audio output (¶156).
As to claim 6, Duda discloses receiving at least one voice command by an audio input of the near eye, wearable wireless transfer device (Par. 0050: the wearer's voice commands would be stored in the electronics module's internal memory, and the wearer's voice commands would be received by the built-in microphone 8).
As to claim 7, Duda discloses that at least one voice command is used to control, via the speech recognition engine, at least one function of the near eye, wearable wireless transfer device (Par. 0050: voice-activated control of the operation of displayed content, such as scrolling/selecting functions, playing media, setting volume, and the like).
As to claim 8, iXBT discloses that an ARM bus of the ARM DSP processor is configured to send the video signal directly to the display driver of the near eye, wearable wireless transfer device (see the circuit diagram: the OMAP 2420 SoC includes internal display drivers; the bus that inherently connects the ARM and the display driver is therefore an ARM bus of the processor).
As to claim 9, the system of Zavracky and iXBT discloses that the signal is decompressed (Abstract; ¶77) by the ARM DSP processor to create the video signal (iXBT).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKSHAY DOSHI whose telephone number is (571)272-2736. The examiner can normally be reached M-F 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN W MILLER can be reached at (571)272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.D./Examiner, Art Unit 2422
/JOHN W MILLER/Supervisory Patent Examiner, Art Unit 2422