Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Detailed Action
Claims -57 are presented for examination.
Non-Statutory Double Patenting Rejection
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a non-statutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-29 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1-14 and 16-29 of U.S. Patent No. 11,128,710, over claims 1-14 and 16-29 of U.S. Patent No. 12,010,174, and over claims 1-14 and 16-29 of U.S. Patent No. 12,177,301. Although the claims at issue are not identical, they are not patentably distinct from each other because the '710, '174, and '301 patents anticipate claims 1-29, as shown in the table below.
Instant Application 18/928,057 | U.S. Patent No. 11,128,710 | U.S. Patent No. 12,010,174 | U.S. Patent No. 12,177,301
(Original) A method for operating multiple actuators in response to captured human voice data, for use with a Wireless Personal Area Network (WPAN), for use with a controlled device that comprises a second actuator, and for use with a client device configured to communicate with the controlled device and with a server device over the WPAN, the method comprising:
capturing, by a microphone in the client device, first and second human voice data;
sending, by the client device to the server device via the WPAN, the captured first and second human voice data;
receiving, by the server device from the client device, the sent captured first and second human voice data;
processing, by the server device, the received captured first and second human voice data;
producing, by the server device, first and second commands in response to the processing;
sending, by the server device to the client device, the first command;
receiving, by the client device from the server device, the sent first command;
operating, by the client device, a first actuator in the client device in response to the received first command;
sending, by the server device to the controlled device, the second command;
receiving, by the controlled device from the server device, the sent second command; and
operating, by the client device, the second actuator in response to the received second command.
12. (Original) The method according to claim 10, wherein the client device and the controlled device are in a building, and wherein the server device is external to the building.
1. A method for operating multiple actuators in response to captured human voice data, for use with a client device and a controlled device in a building, each communicating over a wireless network, and for use with an Internet-connected server device external to the building, the method comprising: capturing, by a microphone in the client device, a first human voice data; sending to the server, by the client device via the wireless network, the captured first human voice data; receiving, by the server over the Internet, the captured first human voice data; processing, by the server, the captured first human voice data; responsive to the processing, sending a first message, by the server to the client device over the Internet; receiving, by the client device via the wireless network, the first message; operating a first actuator in the client device in response to the received first message; capturing, by the microphone in the client device, a second human voice data; sending to the server, by the client device via the wireless network, the captured second human voice data; receiving, by the server over the Internet, the captured second human voice data; processing, by the server, the captured second human voice data; responsive to the processing, sending a second message, by the server to the controlled device over the Internet; receiving, by the controlled device via the wireless network, the second message; and operating a second actuator in the controlled device in response to the received second message.
1. A system for operating multiple actuators in response to captured human voice data, for use with a wireless network in a building and with a controlled device that comprises a second actuator in the building, the system comprising: an Internet-connected server device external to the building configured for processing first and second human voice data, and to produce first and second messages respectively in response to the processing; a client device in the building configured to communicate with the controlled device and with the server device over the wireless network; a microphone in the client device for capturing the first and second human voice data; and a first actuator in the client device, wherein the system is operative for sending to the server by the client device via the wireless network, the captured first and second human voice data, and wherein the system is further operative for receiving from the server the first and second messages in response to the sending, to operate the first actuator in the client device in response to the received first message, and to operate the second actuator in the controlled device in response to the received second message.
1. A method for operating multiple actuators in response to captured human voice data, for use with a Wireless Local Area Network (WLAN) network, for use with a controlled device that comprises a second actuator, and for use with a client device configured to communicate with the controlled device and with an Internet-connected server device over the WLAN, the method comprising: capturing, by a microphone in the client device, first and second human voice data; sending, by the client device to the server via the WLAN over the Internet, the captured first and second human voice data; receiving, by the server device from the client device over the Internet, the sent captured first and second human voice data; processing, by the server device, the received captured first and second human voice data; producing, by the server device, first and second commands in response to the processing; sending, by the server device to the client device over the Internet, the first command; receiving, by the client device from the server device, the sent first command; operating, by the client device, a first actuator in the client device in response to the received first command; sending, by the server device to the controlled device over the Internet, the second command; receiving, by the controlled device from the server device, the sent second command; and operating, by the client device, the second actuator in response to the received second command.
2. (Original) The method according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying a voice of a specific person.
2. The method according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying the voice of a specific person.
2. The system according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying the voice of a specific person.
2. The method according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying a voice of a specific person.
3. (Original) The method according to claim 1, further comprising:
producing, by a sensor in the client device, sensor data that responds to a physical phenomenon; and
sending, by the client device to the server device via the WPAN, the sensor data,
wherein the first or second command is sent by the server device in response to the sensor data.
3. The method according to claim 1, wherein the client device further comprises a sensor that outputs sensor data that responds to a physical phenomenon, wherein the method further comprising sending to the server, by the client device via the wireless network, the sensor data, and wherein the first message is sent by the server in response to the sensor data.
3. The system according to claim 1, wherein the client device further comprises a sensor that outputs sensor data that responds to a physical phenomenon, wherein the system is further configured for sending to the server, by the client device via the wireless network, the sensor data, and wherein the first message is sent by the server in response to the sensor data.
3. The method according to claim 1, further comprising: producing, by a sensor in the client device, sensor data that responds to a physical phenomenon; and sending, by the client device to the server device via the WLAN over the Internet, the sensor data, wherein the first or second command is sent by the server device in response to the sensor data.
4. (Original) The method according to claim 3, wherein the sensor comprises a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor comprises a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
4. The method according to claim 3, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
4. The system according to claim 3, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
4. The method according to claim 3, wherein the sensor comprises a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor comprises a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
5. (Original) The method according to claim 1, wherein each of the first and second actuators is configured for directly or indirectly affecting, changing, producing, or creating a physical phenomenon.
5. The method according to claim 1, wherein each of the first and second actuators is directly or indirectly affecting, changing, producing, or creating a physical phenomenon.
5. The system according to claim 1, wherein each of the first and second actuators is configured for directly or indirectly affecting, changing, producing, or creating a physical phenomenon.
5. The method according to claim 1, wherein each of the first and second actuators is configured for directly or indirectly affecting, changing, producing, or creating a physical phenomenon.
6. (Original) The method according to claim 5, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, electrical current, or any combination thereof.
6. The method according to claim 5, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current.
6. The system according to claim 5, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current.
6. The method according to claim 5, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, electrical current, or any combination thereof.
7. (Original) The method according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the first and second human voice data.
7. The method according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the human voice data.
7. The system according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the human voice data.
7. The method according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the first and second human voice data.
8. (Original) The method according to claim 7, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
8. The method according to claim 7, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
8. The system according to claim 7, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
8. The method according to claim 7, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
9. (Original) The method according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
9. The method according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
9. The system according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
9. The method according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
10. (Original) The method according to claim 1, wherein the client device is addressable in the WPAN or in the Internet using an address stored in a volatile or non-volatile memory for uniquely identifying the respective device in the WPAN.
10. The method according to claim 1, wherein the client device or the controlled device are addressable in the wireless network or the Internet using an address stored in a volatile or non-volatile memory of the respective device for uniquely identifying the respective device in the network.
10. The system according to claim 1, wherein the client device is addressable in the wireless network or the Internet using an address stored in a volatile or non-volatile memory for uniquely identifying the respective device in the network.
10. The method according to claim 1, wherein the client device is addressable in the WLAN or in the Internet using an address stored in a volatile or non-volatile memory for uniquely identifying the respective device in the WLAN.
11. (Original) The method according to claim 10, wherein the address is a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type or wherein the address is a layer 3 address and is static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
11. The method according to claim 10, wherein the address is a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type or wherein the address is a layer 3 address and is static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
11. The system according to claim 10, wherein the address is a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type or wherein the address is a layer 3 address and is static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
11. The method according to claim 10, wherein the address is a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type or wherein the address is a layer 3 address and is static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
13. (Original) The method according to claim 1, wherein the WPAN is according to, or based on, Bluetooth™ or Institute of Electrical and Electronics Engineers (IEEE) 802.15.1-2005 standard.
12. The method according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or Institute of Electrical and Electronics Engineers (IEEE) 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards.
12. The system according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or Institute of Electrical and Electronics Engineers (IEEE) 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards.
13. The method according to claim 1, wherein the WLAN is according to, or base on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, Institute of Electrical and Electronics Engineers (IEEE) IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
14. (Original) The method according to claim 1, wherein the WPAN is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standard.
13. The method according to claim 1, wherein the wireless network is a Wireless Local Area Network (WLAN) that is according to, or base on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, Institute of Electrical and Electronics Engineers (IEEE) IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
13. The system according to claim 1, wherein the wireless network is a Wireless Local Area Network (WLAN) that is according to, or base on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, Institute of Electrical and Electronics Engineers (IEEE) IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
13. The method according to claim 1, wherein the WLAN is according to, or base on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, Institute of Electrical and Electronics Engineers (IEEE) IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
15. (Original) The method according to claim 1, wherein the WPAN uses a wireless communication over an unlicensed radio frequency band.
14. The method according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band.
14. The system according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band.
14. The method according to claim 1, wherein the WLAN uses a wireless communication over an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band.
16. (Original) The method according to claim 1, wherein the controlled device is integrated in, is part of, or is entirely included in, a household appliance having a primary function.
16. The method according to claim 1, wherein the controlled device is integrated in, is part of, or is entirely included in, an appliance.
16. The system according to claim 1, wherein the controlled device is integrated in, is part of, or is entirely included in, a household appliance having a primary function.
16. The method according to claim 1, wherein the controlled device is integrated in, is part of, or is entirely included in, a household appliance having a primary function.
17. (Original) The method according to claim 16, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation.
17. The method according to claim 16, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation.
17. The system according to claim 16, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation.
17. The method according to claim 16, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation.
18. (Original) The method according to claim 17, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker.
18. The method according to claim 17, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker.
18. The system according to claim 17, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker.
18. The method according to claim 17, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker.
19. (Original) The method according to claim 17, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker.
19. The method according to claim 17, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker.
19. The system according to claim 17, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker.
19. The method according to claim 17, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker.
20. (Original) The method according to claim 16, wherein the primary function of the appliance is associated with environmental control, and the appliance is part of a Heating, Ventilation and Air Conditioning (HVAC) system.
20. The method according to claim 16, wherein the primary function of the appliance is associated with environmental control, and the appliance is part of an Heating, Ventilation and Air Conditioning (HVAC) system.
20. The system according to claim 16, wherein the primary function of the appliance is associated with environmental control, and the appliance is part of a Heating, Ventilation and Air Conditioning (HVAC) system.
20. The method according to claim 16, wherein the primary function of the appliance is associated with environmental control, and the appliance is part of a Heating, Ventilation and Air Conditioning (HVAC) system.
21. (Original) The method according to claim 20, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater.
21. The method according to claim 20, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater.
21. The system according to claim 20, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater.
21. The method according to claim 20, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater.
22. (Original) The method according to claim 16, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine, or wherein the appliance is a vacuum cleaner.
22. The method according to claim 16, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine, or wherein the appliance is a vacuum cleaner.
22. The system according to claim 16, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine, or wherein the appliance is a vacuum cleaner.
22. The method according to claim 16, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine, or wherein the appliance is a vacuum cleaner.
23. (Original) The method according to claim 16, wherein the appliance is an answering machine, a telephone set, a home cinema system, a High Fidelity (HiFi) system, a Compact Disc (CD) or Digital Video Disc (DVD) player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier.
23. The method according to claim 16, wherein the appliance is an answering machine, a telephone set, a home cinema system, a High Fidelity (HiFi) system, a Compact Disc (CD) or Digital Video Disc (DVD) player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier.
23. The system according to claim 16, wherein the appliance is an answering machine, a telephone set, a home cinema system, a High Fidelity (HiFi) system, a Compact Disc (CD) or Digital Video Disc (DVD) player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier.
23. The method according to claim 16, wherein the appliance is an answering machine, a telephone set, a home cinema system, a High Fidelity (HiFi) system, a Compact Disc (CD) or Digital Video Disc (DVD) player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier.
24. (Original) The method according to claim 1, wherein the first actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
24. The method according to claim 1, wherein the first actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
24. The system according to claim 1, wherein the first actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
24. The method according to claim 1, wherein the first actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
25. (Original) The method according to claim 24, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
25. The method according to claim 24, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
25. The system according to claim 24, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
25. The method according to claim 24, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
26. (Original) The method according to claim 1, wherein the first actuator is a motion actuator that causes linear or rotary motion.
26. The system according to claim 1, wherein the first actuator is a motion actuator that causes linear or rotary motion.
27. (Original) The method according to claim 1, wherein the first actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves.
27. The system according to claim 1, wherein the first actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves.
28. (Original) The method according to claim 27, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer, a chime, a whistle, or a ringer.
28. The system according to claim 27, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer, a chime, a whistle, or a ringer.
29. (Original) The method according to claim 27, wherein the operating of the first actuator comprises playing digital audio content that is pre-recorded or synthesized, or wherein the operating of the first actuator comprises simulating a voice of a human being or generating music, or wherein the operating of the first actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice.
29. The system according to claim 27, wherein the operating of the first actuator comprises playing digital audio content that is pre-recorded or synthesized, or wherein the operating of the first actuator comprises simulating the voice of a human being or generating music, or wherein the operating of the first actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice.
Claims 30-57 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1-14, 17, and 25-33 of U.S. Patent No. 12,401,721. Although the claims at issue are not identical, they are not patentably distinct from each other because the '721 patent anticipates claims 30-57, as shown in the table below.
Instant Application No. 18/928,057
U.S. Patent No. 12,401,721
30. (New) A first device for use with a Wireless Personal Area Network (WPAN) in a building, the first device comprising:
a WPAN transceiver for communicating over the WPAN; a microphone for capturing human voice data; and a sounder for converting an electrical energy to audible sound waves; and one or more processors programmed with computer program instructions that, when executed, cause the first device to: capture, using the microphone, first and second human voice data; send, over the Internet via the WPAN, to a server device, the captured first and second human voice data; receive, from the server device via the WPAN over the Internet, first and second messages, in response to the sending of the respectively captured first and second human voice data; sound, using the sounder, digital audio content in response to the received first message; and send, to a second device over the WPAN, a control message that comprises a control data for controlling a first actuator in the second device to directly or indirectly affect, change, or produce, a first physical phenomenon, in response to the received second message, wherein each of the first and second devices is addressable in the WPAN and in the Internet using a respective Internet Protocol (IP) address.
45. (New) The first device according to claim 30, further configured for communicating over a Wireless Local Area Network (WLAN).
1. A first device for use with a Wireless Local Area Network (WLAN) in a building, the first device comprising: a WLAN transceiver for communicating over the WLAN; a microphone for capturing human voice data; and a sounder for converting an electrical energy to audible sound waves; and one or more processors programmed with computer program instructions that, when executed, cause the first device to: capture, using the microphone, first and second human voice data; send, over the Internet via the WLAN, to a server device, the captured first and second human voice data; receive, from the server device via the WLAN over the Internet, first and second messages, in response to the sending of the respectively captured first and second human voice data; sound, using the sounder, digital audio content in response to the received first message; and send, to a second device over the WLAN, a control message that comprises a control data for controlling a first actuator in the second device to directly or indirectly affect, change, or produce, a first physical phenomenon, in response to the received second message, wherein each of the first and second devices is addressable in the WLAN and in the Internet using a respective Internet Protocol (IP) address.
31. (New) The first device according to claim 30, wherein the first device or the server device is configured for processing the first and second human voice data using a voice recognition algorithm for identifying a voice of a specific person.
2. The first device according to claim 1, wherein the first device or the server device is configured for processing the first and second human voice data using a voice recognition algorithm for identifying a voice of a specific person.
32. (New) The first device according to claim 30, further comprising a sensor that outputs sensor data that responds to a physical phenomenon, wherein the computer program instructions, when executed, further cause the first device to send to the server device, via the WPAN over the Internet, the sensor data, and wherein the first message is further responsive to the sent sensor data.
3. The first device according to claim 1, further comprising a sensor that outputs sensor data that responds to a physical phenomenon, wherein the computer program instructions, when executed, further cause the first device to send to the server device, via the WLAN over the Internet, the sensor data, and wherein the first message is further responsive to the sent sensor data.
33. (New) The first device according to claim 32, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
4. The first device according to claim 3, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, or wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light or gamma rays.
34. (New) The first device according to claim 30, further comprising a second actuator that is configured for directly or indirectly affect, change, or produce, a second physical phenomenon.
5. The first device according to claim 1, further comprising a second actuator that is configured for directly or indirectly affect, change, or produce, a second physical phenomenon.
35. (New) The first device according to claim 34, wherein the second physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current.
6. The first device according to claim 5, wherein the second physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current.
36. (New) The first device according to claim 30, further comprising an additional microphone for capturing human voice data, and wherein the capturing comprises capturing, by the additional microphone, the first and second human voice data.
7. The first device according to claim 1, further comprising an additional microphone for capturing human voice data, and wherein the capturing comprises capturing, by the additional microphone, the first and second human voice data.
37. (New) The first device according to claim 36, wherein the microphone and the additional microphone are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
8. The first device according to claim 7, wherein the microphone and the additional microphone are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array.
38. (New) The first device according to claim 30, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
9. The first device according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound-based motion of a diaphragm or a ribbon, or wherein the microphone comprises a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
39. (New) The first device according to claim 30, wherein the IP address is stored in a volatile or non-volatile memory.
10. The first device according to claim 1, wherein the IP address is stored in a volatile or non-volatile memory.
40. (New) The first device according to claim 30, further addressed by a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type.
11. The first device according to claim 1, further addressed by a Media Access Control (MAC) layer address that is MAC-48, Extended Unique Identifier (EUI) EUI-48, or EUI-64 address type.
41. (New) The first device according to claim 30, wherein the IP address is a static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
12. The first device according to claim 1, wherein the IP address is a static or dynamic Internet Protocol (IP) address that is IPv4 or IPv6 type address.
42. (New) The first device according to claim 30, wherein the WPAN is according to, or based on, the Bluetooth™ or Institute of Electrical and Electronics Engineers (IEEE) 802.15.1-2005 standard.
43. (New) The first device according to claim 30, wherein the WPAN is according to, or based on, the ZigBee™, IEEE 802.15.4-2003, or Z-Wave™ standard.
46. (New) The first device according to claim 45, wherein the WLAN is according to, or is compatible with, an Institute of Electrical and Electronics Engineers (IEEE) 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac standard.
13. The first device according to claim 1, wherein the WLAN is according to, or is compatible with, an Institute of Electrical and Electronics Engineers (IEEE) 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac standard.
44. (New) The first device according to claim 30, wherein the WPAN uses a wireless communication over an unlicensed radio frequency band.
47. (New) The first device according to claim 45, wherein the WLAN uses an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band.
14. The first device according to claim 1, wherein the WLAN uses an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band.
48. (New) The first device according to claim 30, wherein the second device is integrated in, is part of, or is entirely included in, a household appliance having a primary function.
17. The first device according to claim 1, wherein the second device is integrated in, is part of, or is entirely included in, a household appliance having a primary function.
49. (New) The first device according to claim 30, further configured to operate within a building, and wherein the server device is external to the building.
25. The first device according to claim 1, further configured to operate within a building, and wherein the server device is external to the building.
50. (New) The first device according to claim 30, further comprising an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
26. The first device according to claim 1, further comprising an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays.
51. (New) The first device according to claim 50, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
27. The first device according to claim 26, wherein the electric light source comprises a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
52. (New) The first device according to claim 30, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker.
29. The first device according to claim 1, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker.
53. (New) The first device according to claim 30, wherein the sounder comprises an electric bell, a buzzer, a chime, a whistle, or a ringer.
30. The first device according to claim 1, wherein the sounder comprises an electric bell, a buzzer, a chime, a whistle, or a ringer.
54. (New) The first device according to claim 53, wherein the sounding comprises playing digital audio content that is pre-recorded or synthesized, wherein the sounding comprises simulating the voice of a human being or generating music, or wherein the sounding comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice.
28. The first device according to claim 27, wherein the sounding comprises playing digital audio content that is pre-recorded or synthesized, wherein the sounding comprises simulating the voice of a human being or generating music, or wherein the sounding comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice.
55. (New) The first device according to claim 30, further comprising an enclosure that houses the WPAN transceiver, the sounder, the microphone, and the one or more processors.
56. (New) The first device according to claim 55, wherein the enclosure is a wearable enclosure configured to be wearable on a human body.
31. The first device according to claim 1, further comprising an enclosure that houses the WLAN transceiver, the sounder, the microphone, and the one or more processors.
32. The first device according to claim 31, wherein the enclosure is a wearable enclosure configured to be wearable on a human body.
57. (New) The first device according to claim 30, wherein the sounder is configured for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves.
33. The first device according to claim 1, wherein the sounder is configured for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Moustafa M Meky whose telephone number is (571)272-4005. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ario Etienne can be reached at 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOUSTAFA M. MEKY
Primary Patent Examiner
Art Unit 2457
/MOUSTAFA M MEKY/Primary Examiner, Art Unit 2457
03/05/2026